status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,150 | ["src/Processors/QueryPlan/Optimizations/optimizeUseAggregateProjection.cpp", "tests/queries/0_stateless/02725_agg_projection_resprect_PK.reference", "tests/queries/0_stateless/02725_agg_projection_resprect_PK.sql"] | Projection makes things worse when aggregation matches but filter condition can't be applied to primary key | ```
CREATE TABLE t0
(
`c1` Int64,
`c2` Int64,
`c3` Int64,
`c4` Int64,
PROJECTION p1
(
SELECT
c1,
c2,
sum(c4)
GROUP BY
c2,
c1
)
)
ENGINE = MergeTree
ORDER BY (c1, c2);
INSERT INTO t0 SELECT
number,
rand(1),
rand(2),
rand(3)
FROM numbers_mt(10000000);
```
```
SELECT
c1,
sum(c4)
FROM t0
WHERE c1 = 100
GROUP BY c1
SETTINGS allow_experimental_projection_optimization = 0
Query id: b7e8c17b-069a-4213-a455-2e452e7e699a
┌──c1─┬────sum(c4)─┐
│ 100 │ 3027309634 │
└─────┴────────────┘
1 row in set. Elapsed: 0.004 sec. Processed 8.19 thousand rows, 66.34 KB (2.28 million rows/s., 18.47 MB/s.)
ip-172-31-38-198.us-east-2.compute.internal :) SELECT c1, sum(c4) FROM t0 WHERE c1=100 GROUP BY c1 SETTINGS allow_experimental_projection_optimization=1
SELECT
c1,
sum(c4)
FROM t0
WHERE c1 = 100
GROUP BY c1
SETTINGS allow_experimental_projection_optimization = 1
Query id: f0a793c5-5fed-4b6c-91ef-9b7e336948e1
┌──c1─┬────sum(c4)─┐
│ 100 │ 3027309634 │
└─────┴────────────┘
1 row in set. Elapsed: 0.040 sec. Processed 10.00 million rows, 302.36 MB (251.43 million rows/s., 7.60 GB/s.)
```
```
EXPLAIN
SELECT
c1,
sum(c4)
FROM t0
WHERE c1 = 100
GROUP BY c1
Query id: 3e90cd34-79b8-4a29-a893-49d85811e0ab
┌─explain─────────────────────────────────────┐
│ Expression ((Projection + Before ORDER BY)) │
│   Aggregating                               │
│     Filter                                  │
│       ReadFromMergeTree (p1)                │
└─────────────────────────────────────────────┘
4 rows in set. Elapsed: 0.004 sec.
```
p1 is used, and we read all the data from the projection instead of a couple of granules, and then aggregate them. | https://github.com/ClickHouse/ClickHouse/issues/49150 | https://github.com/ClickHouse/ClickHouse/pull/49417 | 7a0fedb0692daf24313a0eb8743fffa2a2ac4fcc | df9f00e87c08fa385ea6650d0b5480954e76c079 | "2023-04-25T15:10:24Z" | c++ | "2023-05-04T10:52:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,143 | ["docs/en/sql-reference/functions/date-time-functions.md", "src/Functions/makeDate.cpp", "tests/queries/0_stateless/02242_make_date.reference", "tests/queries/0_stateless/02242_make_date.sql", "tests/queries/0_stateless/02242_make_date_mysql.reference", "tests/queries/0_stateless/02242_make_date_mysql.sql", "tests/queries/0_stateless/02243_make_date32.sql", "tests/queries/0_stateless/02243_make_date32_mysql.reference", "tests/queries/0_stateless/02243_make_date32_mysql.sql"] | MySQL compatibility: MAKEDATE function support | **Use case**
This is how QuickSight renders a count query grouped by quarter via the MySQL protocol:
```
SELECT MAKEDATE(YEAR(`created`), 1) + INTERVAL QUARTER(`created`) QUARTER - INTERVAL 1 QUARTER AS `f4245df6-e4eb-428b-9181-8644f65ccaec.created_tg`, COUNT(*) AS `count` FROM `cell_towers` GROUP BY MAKEDATE(YEAR(`created`), 1) + INTERVAL QUARTER(`created`) QUARTER - INTERVAL 1 QUARTER ORDER BY MAKEDATE(YEAR(`created`), 1) + INTERVAL QUARTER(`created`) QUARTER - INTERVAL 1 QUARTER DESC LIMIT 2500;
```
Unfortunately, this fails because the [MAKEDATE](https://dev.mysql.com/doc/refman/8.0/en/date-and-time-functions.html#function_makedate) function is not supported
```
<Error> executeQuery: Code: 46. DB::Exception: Unknown function MAKEDATE. Maybe you meant: ['makeDate','makeDate32']: While processing (MAKEDATE(toYear(created), 1) + toIntervalQuarter(toQuarter(created))) - toIntervalQuarter(1). (UNKNOWN_FUNCTION) (version 23.4.1.1157 (official build)) (from 35.158.127.201:2813) (in query: /* QuickSight 35ce2892-29a1-4b67-9091-94d1977cee7a
{"partner":"QuickSight","entityId":"edcf57ec-4f0c-4b8d-8d7c-aab0cd071809","sheetId":"d4027bfe-96ab-4ba3-80ef-2452f9b7b49e","visualId":"2927351e-9c0b-4a32-a97e-0469bf69e070"} */
SELECT MAKEDATE(YEAR(`created`), 1) + INTERVAL QUARTER(`created`) QUARTER - INTERVAL 1 QUARTER AS `f4245df6-e4eb-428b-9181-8644f65ccaec.created_tg`, COUNT(*) AS `count` FROM `cell_towers` GROUP BY MAKEDATE(YEAR(`created`), 1) + INTERVAL QUARTER(`created`) QUARTER - INTERVAL 1 QUARTER ORDER BY MAKEDATE(YEAR(`created`), 1) + INTERVAL QUARTER(`created`) QUARTER - INTERVAL 1 QUARTER DESC LIMIT 2500), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe31fa95 in /usr/bin/clickhouse
1. ? @ 0x126d4f40 in /usr/bin/clickhouse
2. DB::FunctionFactory::getImpl(String const&, std::shared_ptr<DB::Context const>) const @ 0x126d46eb in /usr/bin/clickhouse
3. DB::FunctionFactory::get(String const&, std::shared_ptr<DB::Context const>) const @ 0x126d518a in /usr/bin/clickhouse
4. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x1306cd14 in /usr/bin/clickhouse
5. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x1306d949 in /usr/bin/clickhouse
6. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x1306d949 in /usr/bin/clickhouse
7. ? @ 0x13061355 in /usr/bin/clickhouse
8. DB::ExpressionAnalyzer::getRootActionsNoMakeSet(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::ActionsDAG>&, bool) @ 0x1303ca3d in /usr/bin/clickhouse
9. DB::ExpressionAnalyzer::analyzeAggregation(std::shared_ptr<DB::ActionsDAG>&) @ 0x1303abb7 in /usr/bin/clickhouse
10. DB::ExpressionAnalyzer::ExpressionAnalyzer(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::TreeRewriterResult const> const&, std::shared_ptr<DB::Context const>, unsigned long, bool, bool, std::shared_ptr<DB::PreparedSets>, bool) @ 0x130391b6 in /usr/bin/clickhouse
11. ? @ 0x13a00a33 in /usr/bin/clickhouse
12. ? @ 0x13a093e6 in /usr/bin/clickhouse
13. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>) @ 0x13a040d1 in /usr/bin/clickhouse
14. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::shared_ptr<DB::IAST> const&, std::vector<String, std::allocator<String>> const&) @ 0x13aa42a2 in /usr/bin/clickhouse
15. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&) @ 0x13aa1f13 in /usr/bin/clickhouse
16. DB::InterpreterFactory::get(std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x139bb290 in /usr/bin/clickhouse
17. ? @ 0x13dbf9a2 in /usr/bin/clickhouse
18. ? @ 0x13dc612c in /usr/bin/clickhouse
19. DB::MySQLHandler::comQuery(DB::ReadBuffer&) @ 0x14b68d45 in /usr/bin/clickhouse
20. DB::MySQLHandler::run() @ 0x14b65754 in /usr/bin/clickhouse
21. Poco::Net::TCPServerConnection::start() @ 0x17aeb0d4 in /usr/bin/clickhouse
22. Poco::Net::TCPServerDispatcher::run() @ 0x17aec2fb in /usr/bin/clickhouse
23. Poco::PooledThread::run() @ 0x17c6a7a7 in /usr/bin/clickhouse
24. Poco::ThreadImpl::runnableEntry(void*) @ 0x17c681dd in /usr/bin/clickhouse
25. ? @ 0x7f3f3b701609 in ?
26. __clone @ 0x7f3f3b626133 in ?
2023.04.25 13:11:27.897575 [ 319 ] {} <Error> MySQLHandler: MySQLHandler: Cannot read packet: : Code: 46. DB::Exception: Unknown function MAKEDATE. Maybe you meant: ['makeDate','makeDate32']: While processing (MAKEDATE(toYear(created), 1) + toIntervalQuarter(toQuarter(created))) - toIntervalQuarter(1). (UNKNOWN_FUNCTION), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe31fa95 in /usr/bin/clickhouse
1. ? @ 0x126d4f40 in /usr/bin/clickhouse
2. DB::FunctionFactory::getImpl(String const&, std::shared_ptr<DB::Context const>) const @ 0x126d46eb in /usr/bin/clickhouse
3. DB::FunctionFactory::get(String const&, std::shared_ptr<DB::Context const>) const @ 0x126d518a in /usr/bin/clickhouse
4. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x1306cd14 in /usr/bin/clickhouse
5. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x1306d949 in /usr/bin/clickhouse
6. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x1306d949 in /usr/bin/clickhouse
7. ? @ 0x13061355 in /usr/bin/clickhouse
8. DB::ExpressionAnalyzer::getRootActionsNoMakeSet(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::ActionsDAG>&, bool) @ 0x1303ca3d in /usr/bin/clickhouse
9. DB::ExpressionAnalyzer::analyzeAggregation(std::shared_ptr<DB::ActionsDAG>&) @ 0x1303abb7 in /usr/bin/clickhouse
10. DB::ExpressionAnalyzer::ExpressionAnalyzer(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::TreeRewriterResult const> const&, std::shared_ptr<DB::Context const>, unsigned long, bool, bool, std::shared_ptr<DB::PreparedSets>, bool) @ 0x130391b6 in /usr/bin/clickhouse
11. ? @ 0x13a00a33 in /usr/bin/clickhouse
12. ? @ 0x13a093e6 in /usr/bin/clickhouse
13. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>) @ 0x13a040d1 in /usr/bin/clickhouse
14. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::shared_ptr<DB::IAST> const&, std::vector<String, std::allocator<String>> const&) @ 0x13aa42a2 in /usr/bin/clickhouse
15. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&) @ 0x13aa1f13 in /usr/bin/clickhouse
16. DB::InterpreterFactory::get(std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x139bb290 in /usr/bin/clickhouse
17. ? @ 0x13dbf9a2 in /usr/bin/clickhouse
18. ? @ 0x13dc612c in /usr/bin/clickhouse
19. DB::MySQLHandler::comQuery(DB::ReadBuffer&) @ 0x14b68d45 in /usr/bin/clickhouse
20. DB::MySQLHandler::run() @ 0x14b65754 in /usr/bin/clickhouse
21. Poco::Net::TCPServerConnection::start() @ 0x17aeb0d4 in /usr/bin/clickhouse
22. Poco::Net::TCPServerDispatcher::run() @ 0x17aec2fb in /usr/bin/clickhouse
23. Poco::PooledThread::run() @ 0x17c6a7a7 in /usr/bin/clickhouse
24. Poco::ThreadImpl::runnableEntry(void*) @ 0x17c681dd in /usr/bin/clickhouse
25. ? @ 0x7f3f3b701609 in ?
26. __clone @ 0x7f3f3b626133 in ?
(version 23.4.1.1157 (official build))
```
**How to reproduce**
* Which ClickHouse server version to use: the latest main branch version will do. I used the head Docker image from the beginning of April: `23.4.1.170`
* Which interface to use, if it matters: MySQL
* `CREATE TABLE` statements for all tables involved: use [cell towers](https://clickhouse.com/docs/en/getting-started/example-datasets/cell-towers) dataset, then query via MySQL protocol
**Describe the solution you'd like**
MySQL's [MAKEDATE](https://dev.mysql.com/doc/refman/8.0/en/date-and-time-functions.html#function_makedate) function should be supported.
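In the meantime, a workaround sketch (my suggestion, not part of the original report) is to emulate MySQL's `MAKEDATE(year, dayofyear)` with the existing `makeDate` and `addDays` functions:
```sql
-- MAKEDATE(y, d) is January 1st of year y advanced by d - 1 days
SELECT addDays(makeDate(2023, 1, 1), 100 - 1) AS d;  -- emulates MAKEDATE(2023, 100), returns 2023-04-10
```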
| https://github.com/ClickHouse/ClickHouse/issues/49143 | https://github.com/ClickHouse/ClickHouse/pull/49603 | 5078231c7e84ee929e25266dde7e2bad29f0bae9 | f4eabd967d8db5e58e51bfa07d609dae6ba03de2 | "2023-04-25T13:22:48Z" | c++ | "2023-05-07T19:51:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,041 | ["docker/docs/builder/Dockerfile", "tests/ci/ci_config.py", "tests/ci/commit_status_helper.py", "tests/ci/docs_check.py", "tests/ci/run_check.py"] | Enforce documentation for `pr-feature` PRs | After some thoughts, we'll just add the pending documentation status. Everything will work as usual, but it will block merging the PR unless the DocsCheck is finished. | https://github.com/ClickHouse/ClickHouse/issues/49041 | https://github.com/ClickHouse/ClickHouse/pull/49090 | 791e91a4c57f52de4aac77590db86181aceb4492 | f6a7e83932e7a32165c945f3904d2dbfcff52d92 | "2023-04-22T10:02:17Z" | c++ | "2023-04-24T12:19:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,039 | ["src/Formats/newLineSegmentationEngine.cpp", "src/Formats/newLineSegmentationEngine.h", "src/Functions/array/arrayAUC.cpp", "src/Processors/Formats/Impl/LineAsStringRowInputFormat.cpp", "src/Processors/Formats/Impl/RegexpRowInputFormat.cpp", "tests/queries/0_stateless/02722_line_as_string_consistency.reference", "tests/queries/0_stateless/02722_line_as_string_consistency.sh"] | Regexp and LineAsString are using the same segmentation engine, but the behavior is different with respect to `\r` characters | null | https://github.com/ClickHouse/ClickHouse/issues/49039 | https://github.com/ClickHouse/ClickHouse/pull/49052 | 76230be7c8a93b7ce8ac28c6b05448cd8243640a | e7087e921770441ba299b7124ed33b564e35edb8 | "2023-04-22T00:43:05Z" | c++ | "2023-04-23T18:25:45Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,028 | ["docs/en/operations/settings/settings.md", "src/Core/Settings.h", "src/IO/ReadSettings.h", "src/IO/ReadWriteBufferFromHTTP.cpp", "src/Interpreters/Context.cpp", "tests/ci/stress.py"] | Propose SETTING make_head_request=0 for engine=URL and url() table function | **Use case**
Currently, ClickHouse makes two HTTP requests when selecting data from `engine=URL()` and the `url()` table function:
first a `HEAD` request, to receive the `Content-Length` header and decide whether parallel download is possible,
and second a `GET` request, to retrieve the whole content body.
For some dynamic HTTP endpoints, generating the data twice may be a heavy operation.
Moreover, custom HTTP servers may not implement the HEAD method at all and may return an error.
**Describe the solution you'd like**
I propose adding a `SETTINGS make_head_request=0` option for `engine=URL()` and the `url()` table function.
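A sketch of how the proposed setting could look in a query (hypothetical syntax — this setting does not exist at the time of the report):
```sql
SELECT *
FROM url('https://example.com/data.csv', CSVWithNames)
SETTINGS make_head_request = 0;  -- proposed: skip the initial HEAD request
```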
| https://github.com/ClickHouse/ClickHouse/issues/49028 | https://github.com/ClickHouse/ClickHouse/pull/54602 | 91609a104c267e02a1788372489affc341f6dfe1 | 252cb8a5073c74809d1f3aa80bb4199c752d79e6 | "2023-04-21T17:04:06Z" | c++ | "2023-12-19T12:33:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,012 | ["tests/queries/0_stateless/01346_alter_enum_partition_key_replicated_zookeeper_long.reference", "tests/queries/0_stateless/01346_alter_enum_partition_key_replicated_zookeeper_long.sql"] | Flaky test `01346_alter_enum_partition_key_replicated_zookeeper_long` | Link: https://s3.amazonaws.com/clickhouse-test-reports/0/9d6c3d7a4cc7bdeb8487f2e4212ec33f5e6bf4c7/stateless_tests__release__s3_storage__[1/2].html | https://github.com/ClickHouse/ClickHouse/issues/49012 | https://github.com/ClickHouse/ClickHouse/pull/49099 | ebc7e30fdf08e557aec26bb53b9abcda06b571b2 | 5c80a3dbc2aac4fc39c061602fe8d2233da97b3e | "2023-04-21T12:59:12Z" | c++ | "2023-04-27T11:18:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,997 | ["src/Interpreters/Aggregator.h", "tests/queries/0_stateless/02719_aggregate_with_empty_string_key.reference", "tests/queries/0_stateless/02719_aggregate_with_empty_string_key.sql"] | When key is Nullable(String), the aggregate operator converts the empty string to null | > You have to provide the following information whenever possible.
**Describe what's wrong**

**Does it reproduce on recent release?**
23.3
**How to reproduce**
https://fiddle.clickhouse.com/a4be8131-1332-4f87-b342-853f349f43b2
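The fiddle is external, so here is a minimal sketch of the reported behavior, reconstructed from the issue title (an assumption on my part, not copied from the fiddle):
```sql
-- aggregate by a Nullable(String) key that holds an empty string
SELECT k, count()
FROM (SELECT CAST('', 'Nullable(String)') AS k)
GROUP BY k;
-- per the report, the '' key comes back as NULL in the result
```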
| https://github.com/ClickHouse/ClickHouse/issues/48997 | https://github.com/ClickHouse/ClickHouse/pull/48999 | 9410f20d271042e52bcdf3741d688f250334e41e | 4f6e8b0b3c6e45bac9141cbeee09ed3d551a35d0 | "2023-04-21T06:07:20Z" | c++ | "2023-04-23T13:08:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,936 | ["src/DataTypes/Utils.cpp", "src/DataTypes/Utils.h", "src/Functions/FunctionHelpers.cpp", "src/Interpreters/PreparedSets.cpp", "src/Interpreters/PreparedSets.h", "src/Storages/MergeTree/KeyCondition.cpp", "src/Storages/MergeTree/RPNBuilder.cpp", "src/Storages/MergeTree/RPNBuilder.h", "tests/queries/0_stateless/02882_primary_key_index_in_function_different_types.reference", "tests/queries/0_stateless/02882_primary_key_index_in_function_different_types.sql"] | Analyzer: Subquery filter not using primary key | **Describe the unexpected behaviour**
When using a subquery to filter a table, the primary/sorting key is not used.
**How to reproduce**
This is happening in CH 23.3 but also reproduced in master:
```sql
SELECT version()
┌─version()─┐
│ 23.8.1.1  │
└───────────┘
```
The minimum reproducible script:
```sql
DROP TABLE IF EXISTS test.test;
CREATE TABLE test.test
ENGINE = MergeTree
ORDER BY (pk1, pk2)
SETTINGS index_granularity = 8
AS
SELECT
10 pk1,
number pk2,
'test_' || toString(number) str
FROM numbers(20);
EXPLAIN indexes = 1
SELECT *
FROM test.test
WHERE pk1 <= 10 AND (pk2 IN (5))
SETTINGS allow_experimental_analyzer = 1;
EXPLAIN indexes = 1
SELECT *
FROM test.test
WHERE pk1 <= 10 AND (pk2 IN (SELECT 5))
SETTINGS allow_experimental_analyzer = 1;
EXPLAIN indexes = 1
SELECT *
FROM test.test
WHERE pk1 <= 10 AND (pk2 IN (SELECT number FROM numbers(5, 1)))
SETTINGS allow_experimental_analyzer = 1;
```
When filtering by a subquery, the filter is not being used. See both query plans:
```sql
┌─explain──────────────────────────────────────────────────────────────┐
│ Expression ((Project names + Projection))                            │
│   Expression (Change column names to column identifiers)             │
│     ReadFromMergeTree (test.test)                                    │
│     Indexes:                                                         │
│       PrimaryKey                                                     │
│         Keys:                                                        │
│           pk1                                                        │
│           pk2                         <= pk2 being used              │
│         Condition: and((pk1 in (-Inf, 10]), (pk2 in 1-element set))  │
│         Parts: 1/1                                                   │
│         Granules: 1/3                                                │
└──────────────────────────────────────────────────────────────────────┘
┌─explain──────────────────────────────────────────────────────────────────────────────────────┐
│ CreatingSets (Create sets before main query execution)                                       │
│   Expression ((Project names + Projection))                                                  │
│     Expression (Change column names to column identifiers)                                   │
│       ReadFromMergeTree (test.test)                                                          │
│       Indexes:                                                                               │
│         PrimaryKey                                                                           │
│           Keys:                                                                              │
│             pk1                         <= pk2 is gone                                       │
│           Condition: (pk1 in (-Inf, 10])                                                     │
│           Parts: 1/1                                                                         │
│           Granules: 3/3                                                                      │
│   CreatingSet (Create set for subquery)                                                      │
│     Expression ((Project names + (Projection + Change column names to column identifiers)))  │
│       ReadFromStorage (SystemOne)                                                            │
└──────────────────────────────────────────────────────────────────────────────────────────────┘
┌─explain────────────────────────────────────────────────────────────────┐
│ CreatingSets (Create sets before main query execution)                 │
│   Expression ((Project names + Projection))                            │
│     Expression (Change column names to column identifiers)             │
│       ReadFromMergeTree (test.test)                                    │
│       Indexes:                                                         │
│         PrimaryKey                                                     │
│           Keys:                                                        │
│             pk1                                                        │
│             pk2                                                        │
│           Condition: and((pk1 in (-Inf, 10]), (pk2 in 1-element set))  │
│           Parts: 1/1                                                   │
│           Granules: 1/3                                                │
└────────────────────────────────────────────────────────────────────────┘
```
I have used `SELECT 5` as the subquery for simplicity, but this reproduces with any kind of subquery.
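As a stop-gap (my suggestion, not from the report), a one-row subquery can be rewritten as a scalar subquery; scalar subqueries are evaluated to constants before planning, so the value stays visible to the primary key analysis:
```sql
SELECT *
FROM test.test
WHERE pk1 <= 10 AND pk2 = (SELECT 5)
SETTINGS allow_experimental_analyzer = 1;
```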
**Expected behavior**
When using a subquery I'd expect sorting keys to be used also. In real scenarios this is causing reading more data than expected. | https://github.com/ClickHouse/ClickHouse/issues/48936 | https://github.com/ClickHouse/ClickHouse/pull/54544 | 49ee14f70123298c8fcbef7494a30f7ff67b1b89 | 0bc41bab741b474c2788c3690a8fdb29348488a9 | "2023-04-19T10:59:19Z" | c++ | "2023-09-20T10:54:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,933 | ["docs/en/sql-reference/functions/type-conversion-functions.md"] | No type promotion when adding 64/128/256-bit integers | ClickHouse has integer promotion rules where the sum of two integer types is casted into the next bigger type.
```sql
select toTypeName(toInt8(1) + toInt8(1)); -- is Int16
select toTypeName(toInt16(1) + toInt16(1)); -- is Int32
select toTypeName(toInt32(1) + toInt32(1)); -- is Int64
```
These rules only apply to the types above, but not to these:
```sql
select toTypeName(toInt64(1) + toInt64(1)); -- is Int64
select toTypeName(toInt128(1) + toInt128(1)); -- is Int128
select toTypeName(toInt256(1) + toInt256(1)); -- is Int256
```
(the 256-bit case is in some sense correct as there isn't a bigger type)
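A sketch of why the asymmetry matters in practice: with promotion a sum near a type's maximum stays exact, while without it the result can wrap (assuming ClickHouse's usual C++-style wraparound arithmetic):
```sql
SELECT toInt32(2147483647) + toInt32(1);          -- promoted to Int64, exact: 2147483648
SELECT toInt64(9223372036854775807) + toInt64(1); -- stays Int64 and can wrap around
```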
Not sure: Is this by design and maybe documented somewhere? Is it worth to change for 64/128 bit integers? | https://github.com/ClickHouse/ClickHouse/issues/48933 | https://github.com/ClickHouse/ClickHouse/pull/48937 | c58785aa67a9f1f540c0ebd89f9afa1fcdbe1748 | b8b9f330d9f4e82cbffa765350e20e7f3efd1f5a | "2023-04-19T09:30:13Z" | c++ | "2023-04-19T13:09:57Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,914 | ["src/Databases/DatabaseFactory.cpp", "src/Storages/StoragePostgreSQL.cpp", "tests/integration/test_postgresql_database_engine/test.py"] | Unable to set use_table_cache while creating Postgres database with credentials from named collections. | **Describe what's wrong**
Unable to set `use_table_cache=1` while creating Postgres database with credentials from `named collections`.
Getting an error:
```
Code: 36. DB::Exception: Unexpected key use_table_cache in named collection. Required keys: database, db, password, user, username, optional keys: addresses_expr, host, hostname, on_conflict, port, schema. (BAD_ARGUMENTS) (version 23.3.1.2823 (official build))
```
while trying to execute:
```sql
CREATE DATABASE local_postgres ENGINE = PostgreSQL(psql, schema = 'information_schema', use_table_cache = 1);
```
**Does it reproduce on recent release?**
Yes, reproducible starting from `23.3.1.2823`.
**How to reproduce**
* Prepare named collection configuration
```shell
mkdir ./use_table_cache_bug && cd ./use_table_cache_bug
cat >> named_collections.xml << EOF
<?xml version="1.0"?>
<clickhouse>
<named_collections>
<psql>
<host>host.docker.internal</host>
<port>15432</port>
<user>postgres</user>
<password>testtest</password>
<database>postgres</database>
</psql>
</named_collections>
</clickhouse>
EOF
```
* Run a local instance of Postgres
```shell
docker run --name some-postgres -p 15432:5432 -e POSTGRES_PASSWORD=testtest -d postgres
```
* Run a local instance of Clickhouse 23.3.1.2823, with previously created config:
```shell
docker run -d -p 8123:8123 --name some-clickhouse-server --ulimit nofile=262144:262144 -v ${PWD}/named_collections.xml:/etc/clickhouse-server/conf.d/named_collections.xml clickhouse/clickhouse-server:23.3.1.2823
```
* Connect to [local Clickhouse](http://127.0.0.1:8123/play) and try to execute:
```sql
CREATE DATABASE local_postgres ENGINE = PostgreSQL(psql, schema = 'information_schema', use_table_cache = 1);
```
**Expected behavior**
Database with engine PostgreSQL should be successfully created.
**Error message and/or stacktrace**
```
2023.04.18 20:22:51.102642 [ 48 ] {1a44da8b-a019-43c8-b3e2-4c36d84709fe} <Error> DynamicQueryHandler: Code: 36. DB::Exception: Unexpected key use_table_cache in named collection. Required keys: database, db, password, user, username, optional keys: addresses_expr, host, hostname, on_conflict, port, schema. (BAD_ARGUMENTS), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xba7e624 in /usr/bin/clickhouse
1. ? @ 0xf0c6760 in /usr/bin/clickhouse
2. ? @ 0xf0c4198 in /usr/bin/clickhouse
3. DB::StoragePostgreSQL::processNamedCollectionResult(DB::NamedCollection const&, bool) @ 0x1098a030 in /usr/bin/clickhouse
4. DB::DatabaseFactory::getImpl(DB::ASTCreateQuery const&, String const&, std::shared_ptr<DB::Context const>) @ 0x10152e68 in /usr/bin/clickhouse
5. ? @ 0x101524f0 in /usr/bin/clickhouse
6. DB::InterpreterCreateQuery::createDatabase(DB::ASTCreateQuery&) @ 0x101312dc in /usr/bin/clickhouse
7. DB::InterpreterCreateQuery::execute() @ 0x1014add8 in /usr/bin/clickhouse
8. ? @ 0x105f6144 in /usr/bin/clickhouse
9. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::shared_ptr<DB::Context>, std::function<void (DB::QueryResultDetails const&)>, std::optional<DB::FormatSettings> const&) @ 0x105fa5d0 in /usr/bin/clickhouse
10. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::optional<DB::CurrentThread::QueryScope>&) @ 0x1115d66c in /usr/bin/clickhouse
11. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x11160c7c in /usr/bin/clickhouse
12. DB::HTTPServerConnection::run() @ 0x111c09a8 in /usr/bin/clickhouse
13. Poco::Net::TCPServerConnection::start() @ 0x11dfb164 in /usr/bin/clickhouse
14. Poco::Net::TCPServerDispatcher::run() @ 0x11dfc680 in /usr/bin/clickhouse
15. Poco::PooledThread::run() @ 0x11faf73c in /usr/bin/clickhouse
16. Poco::ThreadImpl::runnableEntry(void*) @ 0x11fad004 in /usr/bin/clickhouse
17. ? @ 0x7d5c8 in /usr/lib/aarch64-linux-gnu/libc.so.6
18. ? @ 0xe5d1c in /usr/lib/aarch64-linux-gnu/libc.so.6
(version 23.3.1.2823 (official build))
```
| https://github.com/ClickHouse/ClickHouse/issues/48914 | https://github.com/ClickHouse/ClickHouse/pull/49481 | c87a6e8e7d3fd91fb81bc81049d0232dfa16e319 | 7c203dbcd2948b5d06b306bc5cc5c663e226856f | "2023-04-18T20:37:10Z" | c++ | "2023-05-05T10:53:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,894 | ["tests/queries/0_stateless/02795_full_join_assert_cast.reference", "tests/queries/0_stateless/02795_full_join_assert_cast.sql"] | Assert cast fails in JOIN constant LowCardinality(String) |
```sql
SELECT any(toTypeName(s)) FROM (SELECT ('a' :: String) as s) t1 FULL JOIN (SELECT ('b' :: LowCardinality(String)) as s) t2 USING (s);
```
<details>
<summary>Stack Trace</summary>
```
Stack trace: 0x7fc9dc6aea7c 0x7fc9dc65a476 0x7fc9dc6407f3 0x2269b7b6 0x2269b835 0x2269bc3f 0x19bcc0ca 0x19bcf84c 0x1a07b6e1 0x2b82b3e5 0x2ad0b3e7 0x2ad0ab83 0x2ad0a2ee 0x2ad1c9a8 0x2ad1c958 0x2ad014b9 0x2ad00e9f 0x2ad00cb5 0x2ad009df 0x2ad005f8 0x2b123f60 0x2ce68aec 0x2ce5fc2b 0x2ca49ec3 0x2ca49c00 0x2ca2a381 0x2ca2a697 0x2ca2b4f8 0x2ca2b455 0x2ca2b435 0x2ca2b415 0x2ca2b3e0 0x226f3eb6 0x226f33b5 0x227eaec3 0x227f4764 0x227f4735 0x227f4719 0x227f467d 0x227f4580 0x227f44f5
4. pthread_kill @ 0x7fc9dc6aea7c in ?
5. raise @ 0x7fc9dc65a476 in ?
6. abort @ 0x7fc9dc6407f3 in ?
7. /home/ubuntu/ClickHouse3/src/Common/Exception.cpp:41: DB::abortOnFailedAssertion(String const&) @ 0x2269b7b6 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
8. /home/ubuntu/ClickHouse3/src/Common/Exception.cpp:64: DB::handle_error_code(String const&, int, bool, std::vector<void*, std::allocator<void*>> const&) @ 0x2269b835 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
9. /home/ubuntu/ClickHouse3/src/Common/Exception.cpp:92: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x2269bc3f in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
10. /home/ubuntu/ClickHouse3/src/Common/Exception.h:54: DB::Exception::Exception(String&&, int, bool) @ 0x19bcc0ca in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
11. /home/ubuntu/ClickHouse3/src/Common/Exception.h:81: DB::Exception::Exception<String, String>(int, FormatStringHelperImpl<std::type_identity<String>::type, std::type_identity<String>::type>, String&&, String&&) @ 0x19bcf84c in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
12. /home/ubuntu/ClickHouse3/src/Common/assert_cast.h:47: DB::ColumnString const& assert_cast<DB::ColumnString const&, DB::IColumn const&>(DB::IColumn const&) @ 0x1a07b6e1 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
13. /home/ubuntu/ClickHouse3/src/Columns/ColumnString.h:131: DB::ColumnString::insertFrom(DB::IColumn const&, unsigned long) @ 0x2b82b3e5 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
14. /home/ubuntu/ClickHouse3/src/Interpreters/HashJoin.cpp:1789: DB::AdderNonJoined<DB::RowRefList>::add(DB::RowRefList const&, unsigned long&, std::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&) @ 0x2ad0b3e7 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
15. /home/ubuntu/ClickHouse3/src/Interpreters/HashJoin.cpp:1956: unsigned long DB::NotJoinedHash<false>::fillColumns<(DB::JoinStrictness)3, HashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, DB::RowRefList, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>>(HashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, DB::RowRefList, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>> const&, std::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&) @ 0x2ad0ab83 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
16. /home/ubuntu/ClickHouse3/src/Interpreters/HashJoin.cpp:1898: unsigned long DB::NotJoinedHash<false>::fillColumnsFromMap<(DB::JoinStrictness)3, DB::HashJoin::MapsTemplate<DB::RowRefList>>(DB::HashJoin::MapsTemplate<DB::RowRefList> const&, std::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&) @ 0x2ad0a2ee in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
17. /home/ubuntu/ClickHouse3/src/Interpreters/HashJoin.cpp:1829: auto DB::NotJoinedHash<false>::fillColumns(std::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&)::'lambda'(auto, auto, auto&)::operator()<std::integral_constant<DB::JoinKind, (DB::JoinKind)3>, std::integral_constant<DB::JoinStrictness, (DB::JoinStrictness)3>, DB::HashJoin::MapsTemplate<DB::RowRefList>>(auto, auto, auto&) const @ 0x2ad1c9a8 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
18. /home/ubuntu/ClickHouse3/src/Interpreters/joinDispatch.h:96: auto bool DB::joinDispatch<std::variant<DB::HashJoin::MapsTemplate<DB::RowRef>, DB::HashJoin::MapsTemplate<DB::RowRefList>, DB::HashJoin::MapsTemplate<std::unique_ptr<DB::SortedLookupVectorBase, std::default_delete<DB::SortedLookupVectorBase>>>>, DB::NotJoinedHash<false>::fillColumns(std::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&)::'lambda'(auto, auto, auto&)&>(DB::JoinKind, DB::JoinStrictness, auto&, auto&&)::'lambda'(auto)::operator()<std::integral_constant<int, 14>>(auto) const @ 0x2ad1c958 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
19. /home/ubuntu/ClickHouse3/base/base/../base/constexpr_helpers.h:18: bool func_wrapper<bool DB::joinDispatch<std::variant<DB::HashJoin::MapsTemplate<DB::RowRef>, DB::HashJoin::MapsTemplate<DB::RowRefList>, DB::HashJoin::MapsTemplate<std::unique_ptr<DB::SortedLookupVectorBase, std::default_delete<DB::SortedLookupVectorBase>>>>, DB::NotJoinedHash<false>::fillColumns(std::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&)::'lambda'(auto, auto, auto&)&>(DB::JoinKind, DB::JoinStrictness, auto&, auto&&)::'lambda'(auto), std::integral_constant<int, 14>>(auto&&, auto&&) @ 0x2ad014b9 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
20. /home/ubuntu/ClickHouse3/base/base/../base/constexpr_helpers.h:24: bool static_for_impl<int, 0, bool DB::joinDispatch<std::variant<DB::HashJoin::MapsTemplate<DB::RowRef>, DB::HashJoin::MapsTemplate<DB::RowRefList>, DB::HashJoin::MapsTemplate<std::unique_ptr<DB::SortedLookupVectorBase, std::default_delete<DB::SortedLookupVectorBase>>>>, DB::NotJoinedHash<false>::fillColumns(std::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&)::'lambda'(auto, auto, auto&)&>(DB::JoinKind, DB::JoinStrictness, auto&, auto&&)::'lambda'(auto), 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23>(auto&&, std::integer_sequence<auto, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23>) @ 0x2ad00e9f in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
21. /home/ubuntu/ClickHouse3/base/base/../base/constexpr_helpers.h:31: bool static_for<0, 24, bool DB::joinDispatch<std::variant<DB::HashJoin::MapsTemplate<DB::RowRef>, DB::HashJoin::MapsTemplate<DB::RowRefList>, DB::HashJoin::MapsTemplate<std::unique_ptr<DB::SortedLookupVectorBase, std::default_delete<DB::SortedLookupVectorBase>>>>, DB::NotJoinedHash<false>::fillColumns(std::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&)::'lambda'(auto, auto, auto&)&>(DB::JoinKind, DB::JoinStrictness, auto&, auto&&)::'lambda'(auto)>(auto&&) @ 0x2ad00cb5 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
22. /home/ubuntu/ClickHouse3/src/Interpreters/joinDispatch.h:84: bool DB::joinDispatch<std::variant<DB::HashJoin::MapsTemplate<DB::RowRef>, DB::HashJoin::MapsTemplate<DB::RowRefList>, DB::HashJoin::MapsTemplate<std::unique_ptr<DB::SortedLookupVectorBase, std::default_delete<DB::SortedLookupVectorBase>>>>, DB::NotJoinedHash<false>::fillColumns(std::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&)::'lambda'(auto, auto, auto&)&>(DB::JoinKind, DB::JoinStrictness, auto&, auto&&) @ 0x2ad009df in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
23. /home/ubuntu/ClickHouse3/src/Interpreters/HashJoin.cpp:1832: DB::NotJoinedHash<false>::fillColumns(std::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&) @ 0x2ad005f8 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
24. /home/ubuntu/ClickHouse3/src/Interpreters/JoinUtils.cpp:864: DB::NotJoinedBlocks::nextImpl() @ 0x2b123f60 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
25. /home/ubuntu/ClickHouse3/src/Interpreters/IJoin.h:111: DB::IBlocksStream::next() @ 0x2ce68aec in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
26. /home/ubuntu/ClickHouse3/src/Processors/Transforms/JoiningTransform.cpp:149: DB::JoiningTransform::work() @ 0x2ce5fc2b in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
27. /home/ubuntu/ClickHouse3/src/Processors/Executors/ExecutionThreadContext.cpp:47: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) @ 0x2ca49ec3 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
28. /home/ubuntu/ClickHouse3/src/Processors/Executors/ExecutionThreadContext.cpp:92: DB::ExecutionThreadContext::executeTask() @ 0x2ca49c00 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
29. /home/ubuntu/ClickHouse3/src/Processors/Executors/PipelineExecutor.cpp:255: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x2ca2a381 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
30. /home/ubuntu/ClickHouse3/src/Processors/Executors/PipelineExecutor.cpp:221: DB::PipelineExecutor::executeSingleThread(unsigned long) @ 0x2ca2a697 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
31. /home/ubuntu/ClickHouse3/src/Processors/Executors/PipelineExecutor.cpp:343: DB::PipelineExecutor::spawnThreads()::$_0::operator()() const @ 0x2ca2b4f8 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
32. /home/ubuntu/ClickHouse3/contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0&>()()) std::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) @ 0x2ca2b455 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
33. /home/ubuntu/ClickHouse3/contrib/llvm-project/libcxx/include/__functional/invoke.h:480: void std::__invoke_void_return_wrapper<void, true>::__call<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) @ 0x2ca2b435 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
34. /home/ubuntu/ClickHouse3/contrib/llvm-project/libcxx/include/__functional/function.h:235: std::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>::operator()[abi:v15000]() @ 0x2ca2b415 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
35. /home/ubuntu/ClickHouse3/contrib/llvm-project/libcxx/include/__functional/function.h:716: void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x2ca2b3e0 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
36. /home/ubuntu/ClickHouse3/contrib/llvm-project/libcxx/include/__functional/function.h:848: std::__function::__policy_func<void ()>::operator()[abi:v15000]() const @ 0x226f3eb6 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
37. /home/ubuntu/ClickHouse3/contrib/llvm-project/libcxx/include/__functional/function.h:1187: std::function<void ()>::operator()() const @ 0x226f33b5 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
38. /home/ubuntu/ClickHouse3/src/Common/ThreadPool.cpp:413: ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x227eaec3 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
39. /home/ubuntu/ClickHouse3/src/Common/ThreadPool.cpp:180: void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, long, std::optional<unsigned long>, bool)::'lambda0'()::operator()() const @ 0x227f4764 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
40. /home/ubuntu/ClickHouse3/contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<void>()()) std::__invoke[abi:v15000]<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, long, std::optional<unsigned long>, bool)::'lambda0'()&>(void&&) @ 0x227f4735 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
41. /home/ubuntu/ClickHouse3/contrib/llvm-project/libcxx/include/tuple:1789: decltype(auto) std::__apply_tuple_impl[abi:v15000]<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, long, std::optional<unsigned long>, bool)::'lambda0'()&, std::tuple<>&>(void&&, std::tuple<>&, std::__tuple_indices<>) @ 0x227f4719 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
42. /home/ubuntu/ClickHouse3/contrib/llvm-project/libcxx/include/tuple:1798: decltype(auto) std::apply[abi:v15000]<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, long, std::optional<unsigned long>, bool)::'lambda0'()&, std::tuple<>&>(void&&, std::tuple<>&) @ 0x227f467d in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
43. /home/ubuntu/ClickHouse3/src/Common/ThreadPool.h:228: ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, long, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'()::operator()() @ 0x227f4580 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
44. /home/ubuntu/ClickHouse3/contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<void>()()) std::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, long, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'()&>(void&&) @ 0x227f44f5 in /home/ubuntu/ClickHouse3/_build/debug/programs/clickhouse
Integrity check of the executable skipped because the reference checksum could not be read.
Error on processing query: Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF) (version 23.4.1.1)
```
</details>
| https://github.com/ClickHouse/ClickHouse/issues/48894 | https://github.com/ClickHouse/ClickHouse/pull/51307 | 3a697544c9362791e298631a7aaf6d7d6155c1e4 | 4c8a4f54ce9a1ce58ea2a6f4c7b5f0ea804ac110 | "2023-04-18T11:43:31Z" | c++ | "2023-06-23T11:44:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,848 | ["docs/en/sql-reference/functions/array-functions.md", "src/Functions/array/arrayCumSum.cpp", "src/Functions/array/arrayCumSumNonNegative.cpp", "src/Functions/array/arrayDifference.cpp", "tests/queries/0_stateless/02716_int256_arrayfunc.reference", "tests/queries/0_stateless/02716_int256_arrayfunc.sql"] | arrayMax, arrayMin, arrayDifference, arrayCumSum doesn't have support for wide ints (Int256) | **Use case**
```
SELECT arrayCumSum([CAST('1', 'Int256'), 2])
Query id: abeebc3d-13d4-4cb5-b69c-f20e8c1b0d51
0 rows in set. Elapsed: 0.007 sec.
Received exception from server (version 23.3.1):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: arrayCumSum cannot add values of type Int256: While processing arrayCumSum([CAST('1', 'Int256'), 2]). (ILLEGAL_TYPE_OF_ARGUMENT)
```
**Describe the solution you'd like**
Array functions should work for wide integers ((U)Int128 and (U)Int256)
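For reference, the expected semantics are just a running (prefix) sum that must not overflow at 128 bits — a minimal Python sketch (Python integers are arbitrary-precision, so they stand in for Int256 here; this is not ClickHouse code):

```python
def array_cum_sum(arr):
    # Running (prefix) sums, matching arrayCumSum semantics.
    out, acc = [], 0
    for v in arr:
        acc += v
        out.append(acc)
    return out

print(array_cum_sum([1, 2, 3]))         # [1, 3, 6]
print(array_cum_sum([2**200, 2**200]))  # values far outside the Int128 range
```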
Just like https://github.com/ClickHouse/ClickHouse/pull/47594 but for Int256 | https://github.com/ClickHouse/ClickHouse/issues/48848 | https://github.com/ClickHouse/ClickHouse/pull/48866 | 21dddf8c4cdacd4a1f25e27e23e27aec6e58b445 | 6aaafbad1f79e215e49f8917207f0998a760961b | "2023-04-17T12:37:20Z" | c++ | "2023-04-19T08:52:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,837 | ["docs/en/operations/settings/query-complexity.md"] | default value for `max_memory_usage` is not same as in `src/Core/Settings.h` | > max_memory_usage
> The maximum amount of RAM to use for running a query on a single server.
>
> In the default configuration file, the maximum is 10 GB.
whereas, in `src/Core/Settings.h`
https://github.com/ClickHouse/ClickHouse/blob/6a4422c56de7534fb2846a174682d295327918ae/src/Core/Settings.h#L401
| https://github.com/ClickHouse/ClickHouse/issues/48837 | https://github.com/ClickHouse/ClickHouse/pull/48940 | 95b64eccbcd22a838a7b273982a75fae7dcef49b | aec97033c99da95cf5718bf8409ec40e5e8eee95 | "2023-04-17T08:19:37Z" | c++ | "2023-04-20T14:56:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,828 | ["src/Functions/array/arrayAUC.cpp", "src/Functions/array/arrayDotProduct.cpp", "src/Functions/array/arrayScalarProduct.h", "src/Functions/vectorFunctions.cpp", "tests/queries/0_stateless/02415_all_new_functions_must_be_documented.reference", "tests/queries/0_stateless/02708_dot_product.reference", "tests/queries/0_stateless/02708_dot_product.sql"] | `dotProduct` should work for arrays | **Use case**
Vector search databases.
**Implementation**
Currently, it works only for tuples.
```
SELECT dotProduct([1., 2.], [3., 4.])
```
Although it can be easily emulated by the following expression:
```
arraySum((x, y) -> x * y, arr1, arr2)
```
And this expression is quite fast, but there is one downside: if the arguments have the Float32 data type, the intermediate expressions and the result will have the Float64 data type.
A dedicated `dotProduct` implementation should keep the calculation within the same data type.
Additionally, it could be further optimized.
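To illustrate the precision point — a Python sketch (not ClickHouse code) that uses `struct` to emulate Float32 rounding: a dedicated implementation would round every intermediate to binary32 instead of promoting to Float64:

```python
import struct

def f32(x):
    # Round a Python float (binary64) to the nearest binary32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

def dot_f32(a, b):
    # Dot product with every intermediate kept at Float32 precision.
    acc = f32(0.0)
    for x, y in zip(a, b):
        acc = f32(acc + f32(f32(x) * f32(y)))
    return acc

print(dot_f32([1.0, 2.0], [3.0, 4.0]))  # 11.0
```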
| https://github.com/ClickHouse/ClickHouse/issues/48828 | https://github.com/ClickHouse/ClickHouse/pull/49050 | 2c8c412835f1799e410c585ac370ce0ebabf7c7a | 4e3188126f5b013f8a80f369bcfef0d49985615b | "2023-04-17T02:51:20Z" | c++ | "2023-05-20T00:07:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,827 | ["src/Functions/FunctionBinaryArithmetic.h", "src/Functions/bitHammingDistance.cpp", "tests/queries/0_stateless/01017_bithamming_distance.reference", "tests/queries/0_stateless/01017_bithamming_distance.sql"] | Support `bitCount` and `bitHammingDistance` for `String` and `FixedString` | **Use case**
Vector search databases often use conversion of the vector space into bit-vectors, where every bit represents a subspace of the space divided by random hyperplanes. The Hamming distance on these bit-vectors represents an approximation of the distance in the original vector space. The typical size of a bit-vector is 4 * N, where N is the original number of dimensions. It can be significant in size, larger than 256-bit, requiring the use of the FixedString data type.
```
SELECT toFixedString('Hello, world!!!!', 16) AS x, toFixedString('Goodbye, xyz!!!!', 16) AS y,
bitAnd(x, y), bitCount(x), bitHammingDistance(x, y)
```
Example:

https://pastila.nl/?06d88bfc/ab814eec85d5bbf8d27496884d95c55d | https://github.com/ClickHouse/ClickHouse/issues/48827 | https://github.com/ClickHouse/ClickHouse/pull/49858 | caee95b89bec933c47d138c621e075bf7aad059d | 4f7bcf01f6349dd673cd4ef634f24884281abaf9 | "2023-04-17T02:42:05Z" | c++ | "2023-05-14T05:28:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,823 | ["src/Processors/QueryPlan/Optimizations/optimizeTree.cpp", "src/Processors/QueryPlan/Optimizations/projectionsCommon.cpp", "src/Processors/QueryPlan/Optimizations/projectionsCommon.h", "tests/queries/0_stateless/01710_normal_projection_with_query_plan_optimization.reference", "tests/queries/0_stateless/01710_normal_projection_with_query_plan_optimization.sql"] | The behavior of filtering by primary key is completely different when "allow_experimental_projection_optimization" is enabled or disabled. | There is a large table with many fields and data. I am performing a simple select query of the form "select count() where id=123", with the table ordered by this id. When I execute this query, magically, it reads 1.17 million rows in approximately 0.3 seconds. However, if I add "allow_experimental_projection_optimization = 0", the query takes only 0.004 seconds to read 41k rows. I did not have this problem with the previous installation (which used version 23.1.1, while the current one uses 23.3.1 and involves migrating data via insert from remote to new physical servers). What should I pay attention to? Where could the problems be?
Please see this link for more information:
https://pastila.nl/?01b84b53/2c765ea83820b70a1218b1ded906cb2b
With "explain", we get the following picture:
https://pastila.nl/?04d0928e/804c59170baf4e3d3f9514fb4f3a4b70
Can you please advise on what to do? The performance of my working cluster has suddenly degraded. | https://github.com/ClickHouse/ClickHouse/issues/48823 | https://github.com/ClickHouse/ClickHouse/pull/52308 | a39ba00ec34bea1b53f062d9f7507b86c48e4d40 | 4b0be1e535e792648d85c6f42e07cf79e6e34156 | "2023-04-16T16:24:35Z" | c++ | "2023-07-20T16:25:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,814 | ["src/Client/ReplxxLineReader.cpp"] | I'd prefer to not have extra padding for queries from history in clickhouse-client | **Describe the issue**
```
milovidov-desktop :) CREATE TABLE TABLEAU_DB.rfph
(
`timestamp` DateTime,
`latitude` Nullable(Float32) CODEC(Gorilla, ZSTD(1)),
`longitude` Nullable(Float32) CODEC(Gorilla, ZSTD(1)),
`xxxx1` LowCardinality(UInt8),
`xxxx2` LowCardinality(Nullable(Int16)),
`xxxx3` LowCardinality(Nullable(Int16)),
`xxxx4` Nullable(Int32),
`xxxx5` LowCardinality(Nullable(Int32)),
`xxxx6` Nullable(Int32),
`xxxx7` Nullable(Int32),
```
Should be:
```
milovidov-desktop :) CREATE TABLE TABLEAU_DB.rfph
(
`timestamp` DateTime,
`latitude` Nullable(Float32) CODEC(Gorilla, ZSTD(1)),
`longitude` Nullable(Float32) CODEC(Gorilla, ZSTD(1)),
`xxxx1` LowCardinality(UInt8),
`xxxx2` LowCardinality(Nullable(Int16)),
`xxxx3` LowCardinality(Nullable(Int16)),
`xxxx4` Nullable(Int32),
`xxxx5` LowCardinality(Nullable(Int32)),
`xxxx6` Nullable(Int32),
`xxxx7` Nullable(Int32),
```
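A sketch of the desired transformation — strip the uniform continuation indent when a multi-line entry is recalled from history (Python for illustration; the real change would live in `ReplxxLineReader`):

```python
def strip_history_padding(entry: str) -> str:
    # Remove the longest common leading-space prefix from all lines
    # after the first, so recalled queries paste cleanly.
    first, _, rest = entry.partition('\n')
    lines = rest.split('\n') if rest else []
    pad = min((len(l) - len(l.lstrip(' ')) for l in lines if l.strip()), default=0)
    return '\n'.join([first] + [l[pad:] for l in lines])

print(strip_history_padding("SELECT\n      1,\n      2"))
```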
It will make copying queries easier. | https://github.com/ClickHouse/ClickHouse/issues/48814 | https://github.com/ClickHouse/ClickHouse/pull/48870 | 9bc95bed85b9b9c5149d999ada054097f7575a3c | 2b5a1295cf3be0f89827442916d783741a6872fb | "2023-04-15T21:46:24Z" | c++ | "2023-04-19T12:18:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,772 | ["src/Interpreters/ReplaceQueryParameterVisitor.cpp", "tests/queries/0_stateless/02723_param_exception_message_context.reference", "tests/queries/0_stateless/02723_param_exception_message_context.sh"] | If there is exception while parsing a query parameter, add the context, describing what parameter failed to parse. | **Describe the issue**
```
CREATE TABLE sphere_extended
(
`id` UUID,
`url` String,
`domain` String DEFAULT domainWithoutWWW(url),
`url_hash` UInt64 DEFAULT sipHash64(url),
`title` String,
`title_tokens` Array(String) DEFAULT arraySort(arrayDistinct(tokens(lowerUTF8(title)))),
`sha` String EPHEMERAL,
`sha1` FixedString(20) DEFAULT substring(base64Decode(extract(sha, '^sha1:(.+)$')), 1, 20),
`sha1_hex` ALIAS hex(sha1),
`raw` String,
`raw_tokens` Array(String) DEFAULT arraySort(arrayDistinct(tokens(lowerUTF8(raw)))),
`vector` Array(Float32),
`vector16` Array(Float32) DEFAULT arrayMap(x -> reinterpretAsFloat32(bitAnd(reinterpretAsUInt32(x), 4294901760)), vector),
`vector1` Array(Bool) DEFAULT arrayMap((x, c) -> (x > c), vector, {center:Array(String)})
)
ENGINE = MergeTree
PRIMARY KEY url
ORDER BY url
Query id: bb7b72ad-0573-497b-86ff-b3ac16be470d
0 rows in set. Elapsed: 0.020 sec.
Received exception from server (version 23.3.1):
Code: 26. DB::Exception: Received from localhost:9000. DB::ParsingException. DB::ParsingException: Cannot parse quoted string: expected opening quote ''', got '0'. (CANNOT_PARSE_QUOTED_STRING)
``` | https://github.com/ClickHouse/ClickHouse/issues/48772 | https://github.com/ClickHouse/ClickHouse/pull/49061 | f6eefb24408501e9bccd9828603179694336a66c | 76230be7c8a93b7ce8ac28c6b05448cd8243640a | "2023-04-14T06:33:48Z" | c++ | "2023-04-23T18:25:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,770 | ["docs/en/sql-reference/functions/array-functions.md", "src/DataTypes/IDataType.h", "src/Functions/array/arrayJaccardIndex.cpp", "tests/queries/0_stateless/02415_all_new_functions_must_be_documented.reference", "tests/queries/0_stateless/02737_arrayJaccardIndex.reference", "tests/queries/0_stateless/02737_arrayJaccardIndex.sql", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | Jaccard Similarity of arrays | **Use case**
Brute-force similarity search.
**Describe the solution you'd like**
See the implementation of the `arrayIntersect` function.
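The intended semantics — |A ∩ B| / |A ∪ B| over the distinct elements — can be sketched as:

```python
def array_jaccard_index(a, b):
    # Jaccard index on the distinct elements of the two arrays.
    sa, sb = set(a), set(b)
    inter = len(sa & sb)
    return inter / (len(sa) + len(sb) - inter)

print(array_jaccard_index([1, 2, 3], [2, 3, 4]))  # 0.5
```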
**Describe alternatives you've considered**
```
CREATE FUNCTION arrayJaccardIndex AS (a, b) -> length(arrayIntersect(a, b)) / (length(a) + length(b) - length(arrayIntersect(a, b)))
``` | https://github.com/ClickHouse/ClickHouse/issues/48770 | https://github.com/ClickHouse/ClickHouse/pull/50076 | 036fb1fc9bd2cb92dc0f8cc8969fa677e92ed976 | 9d7737ba093e51d7016d82aecc89be3c8af42024 | "2023-04-14T03:12:26Z" | c++ | "2023-07-17T10:22:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,746 | ["src/Storages/StorageReplicatedMergeTree.cpp", "src/Storages/StorageReplicatedMergeTree.h", "tests/integration/test_zero_copy_replication_drop_detached_part/__init__.py", "tests/integration/test_zero_copy_replication_drop_detached_part/configs/storage_conf.xml", "tests/integration/test_zero_copy_replication_drop_detached_part/test.py"] | Broken detached parts cleanup does not work with zero-copy replication | ```
Initialization failed, table will remain readonly. Error: Code: 233. DB::Exception: Unexpected part name: ignored_all_58400_58417_3 for format version: 1. (BAD_DATA_PART_NAME), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe0c66f5 in /usr/bin/clickhouse
1. ? @ 0x1451d6ac in /usr/bin/clickhouse
2. DB::MergeTreePartInfo::fromPartName(String const&, StrongTypedef<unsigned int, DB::MergeTreeDataFormatVersionTag>) @ 0x1451d183 in /usr/bin/clickhouse
3. DB::StorageReplicatedMergeTree::unlockSharedDataByID(String, String const&, String const&, String const&, String const&, std::shared_ptr<DB::ZooKeeperWithFaultInjection> const&, DB::MergeTreeSettings const&, Poco::Logger*, String const&, StrongTypedef<unsigned int, DB::MergeTreeDataFormatVersionTag>) @ 0x14018e28 in /usr/bin/clickhouse
4. DB::StorageReplicatedMergeTree::removeSharedDetachedPart(std::shared_ptr<DB::IDisk>, String const&, String const&, String const&, String const&, String const&, std::shared_ptr<DB::Context const> const&, std::shared_ptr<zkutil::ZooKeeper> const&) @ 0x1402f87f in /usr/bin/clickhouse
5. DB::StorageReplicatedMergeTree::removeDetachedPart(std::shared_ptr<DB::IDisk>, String const&, String const&) @ 0x1402f017 in /usr/bin/clickhouse
6. DB::MergeTreeData::clearOldBrokenPartsFromDetachedDirectory() @ 0x143c0d4d in /usr/bin/clickhouse
7. DB::ReplicatedMergeTreeAttachThread::runImpl() @ 0x146344fa in /usr/bin/clickhouse
8. DB::ReplicatedMergeTreeAttachThread::run() @ 0x14631b56 in /usr/bin/clickhouse
9. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x12586586 in /usr/bin/clickhouse
10. DB::BackgroundSchedulePool::threadFunction() @ 0x1258960a in /usr/bin/clickhouse
11. ? @ 0x1258a44e in /usr/bin/clickhouse
12. ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0xe19a46a in /usr/bin/clickhouse
13. ? @ 0xe19fb21 in /usr/bin/clickhouse
14. ? @ 0x7fe7a20c4b43 in ?
15. ? @ 0x7fe7a2156a00 in ?
```
(this particular issue was worked around by https://github.com/ClickHouse/ClickHouse/pull/48730, but we still need to make the background cleanup work)
| https://github.com/ClickHouse/ClickHouse/issues/48746 | https://github.com/ClickHouse/ClickHouse/pull/48862 | ded8eca0415ea58c70fbefb8acfdb00a158f2669 | 516a0c9784a628cd82418fb9e8217749444a300c | "2023-04-13T12:01:14Z" | c++ | "2023-04-24T10:57:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,736 | ["src/Interpreters/convertFieldToType.cpp", "tests/queries/0_stateless/02714_date_date32_in.reference", "tests/queries/0_stateless/02714_date_date32_in.sql"] | Please add support for the IN operator over combinations of Date and Date32 | Right now it is impossible to compare `Date` and `Date32` via the `IN` operator. This breaks queries when you don't know which one you are going to get.
Example 1 (Date32 in (Date list)):
```
5d7a06876b79 :) select toDate32('2020-01-01') in (toDate('2020-01-01'))
SELECT toDate32('2020-01-01') IN toDate('2020-01-01')
Query id: 5070c5b0-0151-4a29-8655-55c5657b9308
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 22.8.16):
Code: 53. DB::Exception: Received from clickhouse-server:9000. DB::Exception: Type mismatch in IN or VALUES section. Expected: Date32. Got: UInt64: While processing toDate32('2020-01-01') IN toDate('2020-01-01'). (TYPE_MISMATCH)
```
Example 2 (Date in (Date32 list)):
```
5d7a06876b79 :) select toDate('2020-01-01') in (toDate32('2020-01-01'))
SELECT toDate('2020-01-01') IN toDate32('2020-01-01')
Query id: 13fe9e87-7f42-466d-9d29-6c437cbeffd3
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 22.8.16):
Code: 53. DB::Exception: Received from clickhouse-server:9000. DB::Exception: Type mismatch in IN or VALUES section. Expected: Date. Got: Int64: While processing toDate('2020-01-01') IN toDate32('2020-01-01'). (TYPE_MISMATCH)
``` | https://github.com/ClickHouse/ClickHouse/issues/48736 | https://github.com/ClickHouse/ClickHouse/pull/48806 | ce00aa74a05e0c858775c3260477a2bda18e8946 | 527b136ddf9a09d1da0d76a37d3be73f72f690b4 | "2023-04-13T07:50:36Z" | c++ | "2023-04-26T23:17:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,728 | ["src/QueryPipeline/RemoteQueryExecutor.cpp", "tests/queries/0_stateless/01956_skip_unavailable_shards_excessive_attempts.reference", "tests/queries/0_stateless/01956_skip_unavailable_shards_excessive_attempts.sh"] | AGAIN skip_unavailable_shards=1 makes 6 tries if a shard is unavailable | It is happening again.
22.8 and 23.3 are affected.
```
SELECT *
FROM cluster('test_unavailable_shard', system.one)
SETTINGS skip_unavailable_shards = 1, send_logs_level = 'trace'
Query id: d5a5dc35-c7b0-4688-80e9-94423e98ab9b
2023.04.12 19:03:04.669543 [ 28138 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Debug> executeQuery: (from [::1]:45680) select * from cluster('test_unavailable_shard', system.one) settings skip_unavailable_shards=1, send_logs_level='trace' (stage: Complete)
2023.04.12 19:03:04.688746 [ 28138 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Trace> ContextAccess (default): Access granted: REMOTE ON *.*
2023.04.12 19:03:04.689149 [ 28138 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON system.one
2023.04.12 19:03:04.689184 [ 28138 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2023.04.12 19:03:04.689230 [ 28138 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Trace> InterpreterSelectQuery: Complete -> Complete
2023.04.12 19:03:04.689632 [ 18373 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Trace> Connection (localhost:1): Connecting. Database: (not specified). User: default
┌─dummy─┐
│     0 │
└───────┘
2023.04.12 19:03:04.732904 [ 18373 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Warning> HedgedConnectionsFactory: Connection failed at try β1, reason: Code: 210. DB::NetException: Connection refused (localhost:1). (NETWORK_ERROR) (version 23.3.1.2823 (official build))
2023.04.12 19:03:04.732980 [ 18373 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Trace> Connection (localhost:1): Connecting. Database: (not specified). User: default
2023.04.12 19:03:04.775978 [ 18373 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Warning> HedgedConnectionsFactory: Connection failed at try β2, reason: Code: 210. DB::NetException: Connection refused (localhost:1). (NETWORK_ERROR) (version 23.3.1.2823 (official build))
2023.04.12 19:03:04.776054 [ 18373 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Trace> Connection (localhost:1): Connecting. Database: (not specified). User: default
2023.04.12 19:03:04.824794 [ 18373 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Warning> HedgedConnectionsFactory: Connection failed at try β3, reason: Code: 210. DB::NetException: Connection refused (localhost:1). (NETWORK_ERROR) (version 23.3.1.2823 (official build))
2023.04.12 19:03:04.824941 [ 18373 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Trace> Connection (localhost:1): Connecting. Database: (not specified). User: default
2023.04.12 19:03:04.885624 [ 18373 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Warning> HedgedConnectionsFactory: Connection failed at try β1, reason: Code: 210. DB::NetException: Connection refused (localhost:1). (NETWORK_ERROR) (version 23.3.1.2823 (official build))
2023.04.12 19:03:04.885753 [ 18373 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Trace> Connection (localhost:1): Connecting. Database: (not specified). User: default
2023.04.12 19:03:04.944493 [ 18373 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Warning> HedgedConnectionsFactory: Connection failed at try β2, reason: Code: 210. DB::NetException: Connection refused (localhost:1). (NETWORK_ERROR) (version 23.3.1.2823 (official build))
2023.04.12 19:03:04.944582 [ 18373 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Trace> Connection (localhost:1): Connecting. Database: (not specified). User: default
2023.04.12 19:03:04.995784 [ 18373 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Warning> HedgedConnectionsFactory: Connection failed at try β3, reason: Code: 210. DB::NetException: Connection refused (localhost:1). (NETWORK_ERROR) (version 23.3.1.2823 (official build))
2023.04.12 19:03:04.996346 [ 28138 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Debug> executeQuery: Read 1 rows, 1.00 B in 0.326853 sec., 3.059479337806292 rows/sec., 3.06 B/sec.
2023.04.12 19:03:04.996466 [ 28138 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Debug> MemoryTracker: Peak memory usage (for query): 431.31 KiB.
2023.04.12 19:03:04.996477 [ 28138 ] {d5a5dc35-c7b0-4688-80e9-94423e98ab9b} <Debug> TCPHandler: Processed in 0.327207039 sec.
```
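The trace above shows two rounds of three connection attempts each — six `NETWORK_ERROR` lines in total. An illustrative retry loop reproducing that count (names and structure are hypothetical, not the actual ClickHouse code):

```python
def connect_with_failover(connect, max_tries=3, rounds=2):
    # With skip_unavailable_shards=1 the shard should be skipped after
    # one round (3 tries), not retried for rounds * max_tries attempts.
    attempts = 0
    for _ in range(rounds):
        for _ in range(max_tries):
            attempts += 1
            try:
                return connect(), attempts
            except ConnectionError:
                continue
    return None, attempts

def always_refused():
    raise ConnectionError("Connection refused (localhost:1)")

print(connect_with_failover(always_refused))  # (None, 6)
```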
The old issue https://github.com/ClickHouse/ClickHouse/issues/26511
The old fix https://github.com/ClickHouse/ClickHouse/pull/26658 | https://github.com/ClickHouse/ClickHouse/issues/48728 | https://github.com/ClickHouse/ClickHouse/pull/48771 | 0c58c7502350c8d8334a8673ea8b130f81b86bf8 | a2793fcea61ed848bf3c9830b8e25c7afaa6337f | "2023-04-12T19:09:14Z" | c++ | "2023-04-15T20:55:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,718 | ["src/Interpreters/PartLog.cpp", "src/Storages/System/StorageSystemPartsBase.cpp", "tests/queries/0_stateless/02117_show_create_table_system.reference"] | Alias `part` and `part_name` for system.part_log and system.parts | ```
DESCRIBE TABLE system.parts
...
name │ String
```
```
DESCRIBE TABLE system.part_log
...
part_name │ String
```
^
These columns have the same meaning. Let's add an alias `part_name -> name` for `system.parts` and `name -> part_name` for `system.part_log`. | https://github.com/ClickHouse/ClickHouse/issues/48718 | https://github.com/ClickHouse/ClickHouse/pull/48850 | e105a6e9bf4f9deb170f9cbd96f45a293ce66062 | 6805edf9e68a6047770b8ba6440f3ac6583155cc | "2023-04-12T15:16:05Z" | c++ | "2023-04-20T11:04:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,716 | ["docs/en/sql-reference/statements/select/into-outfile.md", "src/Client/ClientBase.cpp", "src/Parsers/ASTQueryWithOutput.h", "src/Parsers/ParserQueryWithOutput.cpp", "tests/queries/0_stateless/02001_append_output_file.reference", "tests/queries/0_stateless/02001_append_output_file.sh"] | Enhance usability of `INTO OUTFILE` clause for SELECT queries | **Use case**
There is a possibility to write `INTO OUTFILE` to export the result of a SELECT query, but if a file with the specified name already exists on the filesystem, an exception is thrown.
**Describe the solution you'd like**
Add new keywords to the parser to be able to write something like `INTO OUTFILE APPEND 'file.txt'` or `INTO OUTFILE REWRITE 'file.txt'`.
**Describe alternatives you've considered**
Probably we can add a user-level setting instead?
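A sketch of the three behaviours under discussion (mode names here are illustrative, not the final syntax):

```python
import os

def open_outfile(path, mode="error"):
    # 'error'   -> current behaviour: refuse to clobber an existing file
    # 'append'  -> proposed INTO OUTFILE ... APPEND
    # 'rewrite' -> proposed INTO OUTFILE ... REWRITE (truncate and overwrite)
    if mode == "error" and os.path.exists(path):
        raise FileExistsError(f"File exists: {path}")
    return open(path, "ab" if mode == "append" else "wb")
```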
| https://github.com/ClickHouse/ClickHouse/issues/48716 | https://github.com/ClickHouse/ClickHouse/pull/48880 | a4f7bfa62de3d27b620d8f3d9b2a809dffb993f1 | 6b2daff663a43d5579750fd427bd4304cb40a282 | "2023-04-12T14:40:49Z" | c++ | "2023-05-10T10:35:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,698 | ["docs/en/sql-reference/table-functions/file.md"] | Need docs for file() function that include PARTITION | See tests dir
```
./performance/file_table_function.xml:        INSERT INTO FUNCTION file('test_file_{{_partition_id}}', '{format}', 'partition_id UInt64, value UInt64')
./performance/file_table_function.xml:        INSERT INTO FUNCTION file('test_file_{{_partition_id}}', '{format}', 'partition_id UInt64, value1 UInt64, value2 UInt64, value3 UInt64, value4 UInt64, value5 UInt64')
./queries/0_stateless/02105_table_function_file_partiotion_by.sh:${CLICKHOUSE_CLIENT} --query="insert into table function file('${FILE_PATH}/test_{_partition_id}', 'TSV', 'column1 UInt32, column2 UInt32, column3 UInt32') PARTITION BY column3 values ${values}";
``` | https://github.com/ClickHouse/ClickHouse/issues/48698 | https://github.com/ClickHouse/ClickHouse/pull/48947 | d43dbc78b6d87dc57eef4b0e9a7c65fb92644047 | 5bcd443bce833e0f613bee5aaa76cebb7e14c953 | "2023-04-12T12:43:33Z" | c++ | "2023-04-20T14:53:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,694 | ["tests/queries/0_stateless/01054_window_view_proc_tumble_to.sh"] | 01054_window_view_proc_tumble_to is flaky | U0VMRUNUIHB1bGxfcmVxdWVzdF9udW1iZXIsIGNoZWNrX3N0YXJ0X3RpbWUsIGNoZWNrX25hbWUsIHRlc3RfbmFtZSwgdGVzdF9zdGF0dXMsIGNoZWNrX3N0YXR1cywgcmVwb3J0X3VybApGUk9NIGNoZWNrcwpXSEVSRSAxCiAgICAtLUFORCBwdWxsX3JlcXVlc3RfbnVtYmVyID0gMAogICAgQU5EIHRlc3Rfc3RhdHVzICE9ICdTS0lQUEVEJwogICAgQU5EIHRlc3Rfc3RhdHVzICE9ICdPSycKICAgIEFORCBjaGVja19zdGF0dXMgIT0gJ3N1Y2Nlc3MnCiAgICBBTkQgdGVzdF9uYW1lIGxpa2UgJyUwMTA1NF93aW5kb3dfdmlld19wcm9jX3R1bWJsZV90byUnCk9SREVSIEJZIGNoZWNrX3N0YXJ0X3RpbWUgZGVzYywgY2hlY2tfbmFtZSwgdGVzdF9uYW1l
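The body above appears to be the base64 fragment of a https://play.clickhouse.com/play URL with the prefix lost (compare the next report, which keeps the full URL); the query can be recovered locally, for example in Python:

```python
import base64

# First 36 characters of the fragment above; decoding the whole blob
# yields the full flakiness query against the `checks` table.
fragment = "U0VMRUNUIHB1bGxfcmVxdWVzdF9udW1iZXIs"
print(base64.b64decode(fragment).decode())
```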
https://s3.amazonaws.com/clickhouse-test-reports/0/17aecb797cb863ea02251b9f0daa4a4a12551de2/stateless_tests__asan__[4/4].html | https://github.com/ClickHouse/ClickHouse/issues/48694 | https://github.com/ClickHouse/ClickHouse/pull/48824 | 82fe9d4dbe80d1c5cedb23850ac1b104b0cbc6cf | 0dab82c420df1961f3832fc07cf52efa046a4eb2 | "2023-04-12T11:23:26Z" | c++ | "2023-04-17T03:25:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,691 | ["tests/integration/test_merge_tree_s3/test.py"] | test_store_cleanup_disk_s3 is flaky | https://s3.amazonaws.com/clickhouse-test-reports/0/c3482a104f73f237450f7d7781871859472687d9/integration_tests__asan__[6/6].html
https://play.clickhouse.com/play?user=play#U0VMRUNUIHB1bGxfcmVxdWVzdF9udW1iZXIsIGNoZWNrX3N0YXJ0X3RpbWUsIGNoZWNrX25hbWUsIHRlc3RfbmFtZSwgdGVzdF9zdGF0dXMsIGNoZWNrX3N0YXR1cywgcmVwb3J0X3VybApGUk9NIGNoZWNrcwpXSEVSRSAxCiAgICAtLUFORCBwdWxsX3JlcXVlc3RfbnVtYmVyID0gMAogICAgQU5EIHRlc3Rfc3RhdHVzICE9ICdTS0lQUEVEJwogICAgQU5EIHRlc3Rfc3RhdHVzICE9ICdPSycKICAgIEFORCBjaGVja19zdGF0dXMgIT0gJ3N1Y2Nlc3MnCiAgICBBTkQgdGVzdF9uYW1lIGxpa2UgJyV0ZXN0X3N0b3JlX2NsZWFudXBfZGlza19zMyUnCk9SREVSIEJZIGNoZWNrX3N0YXJ0X3RpbWUgZGVzYywgY2hlY2tfbmFtZSwgdGVzdF9uYW1l | https://github.com/ClickHouse/ClickHouse/issues/48691 | https://github.com/ClickHouse/ClickHouse/pull/50558 | fb4e950f9f6545bcfae057ef4e497aa6cd3f2027 | afdf7eaed7fab6d5f25663d66be97c301acc5195 | "2023-04-12T11:15:40Z" | c++ | "2023-06-05T08:14:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,688 | ["tests/queries/0_stateless/02455_one_row_from_csv_memory_usage.sh"] | Flaky test `02455_one_row_from_csv_memory_usage` | Link: https://s3.amazonaws.com/clickhouse-test-reports/48666/165edd332e58c280808a842331d9f6a9934dfa59/stateless_tests__ubsan__[2/2].html | https://github.com/ClickHouse/ClickHouse/issues/48688 | https://github.com/ClickHouse/ClickHouse/pull/48756 | 1996049256b559504b287bc251085224423dedea | eafa3e8f643281537848734508369de709e18f00 | "2023-04-12T10:37:52Z" | c++ | "2023-04-15T10:50:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,682 | ["src/Processors/QueryPlan/Optimizations/optimizeReadInOrder.cpp", "tests/queries/0_stateless/02811_read_in_order_and_array_join_bug.reference", "tests/queries/0_stateless/02811_read_in_order_and_array_join_bug.sql"] | Cannot find column less(id, 100) in ActionsDAG result. (UNKNOWN_IDENTIFIER) | Good afternoon. I ran into a problem: when I select from a view, I get an error. I found a workaround (turn off the optimization via the setting `optimize_read_in_order`), but in my opinion this setting should not throw an exception.
Version: ClickHouse 23.3.1.2823-lts
example:
https://fiddle.clickhouse.com/66ae12fb-11ee-4684-b07f-531119686419
or
```
drop table if exists default.test_array_joins
;
create table default.test_array_joins
(
id UInt64 default rowNumberInAllBlocks() + 1,
arr_1 Array(String),
arr_2 Array(String),
arr_3 Array(String),
arr_4 Array(String)
) engine = MergeTree order by id
;
insert into default.test_array_joins (id,arr_1, arr_2, arr_3, arr_4)
SELECT number,array(randomPrintableASCII(3)),array(randomPrintableASCII(3)),array(randomPrintableASCII(3)),array(randomPrintableASCII(3))
from numbers(1000)
;
create or replace view default.v4test_array_joins as SELECT * from default.test_array_joins where id != 10
;
select * from default.v4test_array_joins
array join columns('^arr')
where match(arr_4,'a')
and id < 100
ORDER by id limit 1 by arr_1 settings optimize_read_in_order = 1
```
Another interesting case: if a known-true condition is added first in the WHERE clause, the query works without errors:
```
select * from default.v4test_array_joins
array join columns('^arr')
where true and match(arr_4,'a')
and id < 100
ORDER by id limit 1 by arr_1 settings optimize_read_in_order = 1
```
| https://github.com/ClickHouse/ClickHouse/issues/48682 | https://github.com/ClickHouse/ClickHouse/pull/51746 | 0523e6cbd0b0793572d7c3f36347190dcc016f7e | f748f1242678dca1fc789fbcdfc61969f1f73112 | "2023-04-12T09:07:04Z" | c++ | "2023-10-31T10:51:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,676 | ["tests/queries/0_stateless/02796_projection_date_filter_on_view.reference", "tests/queries/0_stateless/02796_projection_date_filter_on_view.sql"] | SELECT with date filter on view using projection causes segmentation fault | I get a segmentation fault when I `SELECT` from a `VIEW`, which `SELECT`s from a table with the corresponding `PROJECTION`. It only happens if I include a `BETWEEN` filter on the `DateTime64(3, 'UTC')` column, which is part of the `GROUP BY` in the projection.
Here is some code for reproduction; it is self-contained and should yield an error (it ultimately crashes ClickHouse):
```sql
create database test;
-- create source table
CREATE TABLE test.fx_1m (
`symbol` LowCardinality(String) CODEC(ZSTD),
`dt_close` DateTime64(3, 'UTC') CODEC(DoubleDelta, ZSTD),
`open` Float32 CODEC(Delta, ZSTD),
`high` Float32 CODEC(Delta, ZSTD),
`low` Float32 CODEC(Delta, ZSTD),
`close` Float32 CODEC(Delta, ZSTD),
`volume` Float32 CODEC(Delta, ZSTD)
)
ENGINE = MergeTree()
PARTITION BY toYear(dt_close)
ORDER BY (symbol, dt_close);
-- add projection
ALTER TABLE test.fx_1m
ADD PROJECTION fx_5m (
SELECT
symbol,
toStartOfInterval(dt_close, INTERVAL 300 SECOND) AS dt_close,
argMin(open, dt_close),
max(high),
min(low),
argMax(close, dt_close),
sum(volume) volume
GROUP BY symbol, dt_close
);
-- materialize projection
ALTER TABLE test.fx_1m MATERIALIZE PROJECTION fx_5m SETTINGS mutations_sync = 2;
-- create view using projection
CREATE VIEW test.fx_5m AS
SELECT
symbol,
toStartOfInterval(dt_close, INTERVAL 300 SECOND) AS dt_close,
argMin(open, dt_close) open,
max(high) high,
min(low) low,
argMax(close, dt_close) close,
sum(volume) volume
FROM test.fx_1m
GROUP BY symbol, dt_close;
-- insert sample data
INSERT INTO test.fx_1m
SELECT
'EURUSD',
toDateTime64('2022-12-12 12:00:00', 3, 'UTC') + number,
number + randCanonical(),
number + randCanonical(),
number + randCanonical(),
number + randCanonical(),
number + randCanonical()
FROM numbers(1000000);
-- ------------------------------------------------------------------------
-- test cases
-- ------------------------------------------------------------------------
-- !!! segmentation fault (filter on dt_close column)
SELECT
dt_close,
close
FROM test.fx_5m
where symbol = 'EURUSD' and dt_close between '2022-12-11' and '2022-12-13'
order by dt_close;
```
Clickhouse Fiddle for reproduction: [https://fiddle.clickhouse.com/143d0605-3296-4ed0-a324-a7372e4b098c](https://fiddle.clickhouse.com/143d0605-3296-4ed0-a324-a7372e4b098c)
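For intuition about the projection involved, `toStartOfInterval(dt_close, INTERVAL 300 SECOND)` buckets timestamps by flooring them to 300-second boundaries; a quick local model in Python (a sketch, not ClickHouse source):

```python
from datetime import datetime, timezone

def to_start_of_interval(dt: datetime, seconds: int = 300) -> datetime:
    # Floor an epoch-aligned timestamp to the start of its N-second bucket.
    ts = int(dt.timestamp())
    return datetime.fromtimestamp(ts - ts % seconds, tz=timezone.utc)

print(to_start_of_interval(datetime(2022, 12, 12, 12, 3, 7, tzinfo=timezone.utc)))
```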
**Affected ClickHouse versions**
It happens on all **23.3+** versions.
**Expected behavior**
Return a result-set according to `SELECT` queries. I want to provide `VIEW`s which correspond to the projection queries so that queries using that projection are easier to communicate/read, i.e. look like normal tables.
**Error message and/or stacktrace**
```sh
2023.04.12 00:05:12.354688 [ 242503 ] {} <Fatal> BaseDaemon: ########################################
2023.04.12 00:05:12.354724 [ 242503 ] {} <Fatal> BaseDaemon: (version 23.3.1.2823 (official build), build id: 3D3A9C876D2B37B07E5A6F5020720D717C92D0EB) (from thread 242336) (query_id: 6f771062-92e0-481a-addb-d67c1061de15) (query:
SELECT
dt_close,
close
FROM test.fx_5m
where symbol = 'EURUSD' and dt_close between '2022-12-15' and '2022-12-24'
order by dt_close
LIMIT 0, 1000) Received signal Segmentation fault (11)
2023.04.12 00:05:12.354731 [ 242503 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2023.04.12 00:05:12.354740 [ 242503 ] {} <Fatal> BaseDaemon: Stack trace: 0x145cdaf4 0x145c0d08 0x145c1a91 0x145beab8 0x14d13097 0x14933206 0x1494ce0a 0x14941e7b 0x149441bc 0xe25cc73 0xe2628e1 0x7f7fb4a94b43 0x7f7fb4b26a00
2023.04.12 00:05:12.354765 [ 242503 ] {} <Fatal> BaseDaemon: 2. DB::MergeTreeRangeReader::read(unsigned long, DB::MarkRanges&) @ 0x145cdaf4 in /usr/bin/clickhouse
2023.04.12 00:05:12.354775 [ 242503 ] {} <Fatal> BaseDaemon: 3. DB::IMergeTreeSelectAlgorithm::readFromPartImpl() @ 0x145c0d08 in /usr/bin/clickhouse
2023.04.12 00:05:12.354778 [ 242503 ] {} <Fatal> BaseDaemon: 4. DB::IMergeTreeSelectAlgorithm::readFromPart() @ 0x145c1a91 in /usr/bin/clickhouse
2023.04.12 00:05:12.354782 [ 242503 ] {} <Fatal> BaseDaemon: 5. DB::IMergeTreeSelectAlgorithm::read() @ 0x145beab8 in /usr/bin/clickhouse
2023.04.12 00:05:12.354786 [ 242503 ] {} <Fatal> BaseDaemon: 6. DB::MergeTreeSource::tryGenerate() @ 0x14d13097 in /usr/bin/clickhouse
2023.04.12 00:05:12.354791 [ 242503 ] {} <Fatal> BaseDaemon: 7. DB::ISource::work() @ 0x14933206 in /usr/bin/clickhouse
2023.04.12 00:05:12.354794 [ 242503 ] {} <Fatal> BaseDaemon: 8. DB::ExecutionThreadContext::executeTask() @ 0x1494ce0a in /usr/bin/clickhouse
2023.04.12 00:05:12.354800 [ 242503 ] {} <Fatal> BaseDaemon: 9. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x14941e7b in /usr/bin/clickhouse
2023.04.12 00:05:12.354803 [ 242503 ] {} <Fatal> BaseDaemon: 10. ? @ 0x149441bc in /usr/bin/clickhouse
2023.04.12 00:05:12.354810 [ 242503 ] {} <Fatal> BaseDaemon: 11. ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0xe25cc73 in /usr/bin/clickhouse
2023.04.12 00:05:12.354814 [ 242503 ] {} <Fatal> BaseDaemon: 12. ? @ 0xe2628e1 in /usr/bin/clickhouse
2023.04.12 00:05:12.354818 [ 242503 ] {} <Fatal> BaseDaemon: 13. ? @ 0x7f7fb4a94b43 in ?
2023.04.12 00:05:12.354822 [ 242503 ] {} <Fatal> BaseDaemon: 14. ? @ 0x7f7fb4b26a00 in ?
2023.04.12 00:05:12.453559 [ 242503 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: E5CD36BDC4D2AE0CA714E60BAD060B4D)
2023.04.12 00:05:13.453366 [ 241920 ] {} <Fatal> Application: Child process was terminated by signal 11.
```
| https://github.com/ClickHouse/ClickHouse/issues/48676 | https://github.com/ClickHouse/ClickHouse/pull/51308 | 75ef844f99a3af081978a7d905dcf2c214508237 | e1b0991de226f795886d2ed13e27ca94cb3a7641 | "2023-04-12T00:47:28Z" | c++ | "2023-06-23T12:02:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,640 | ["docs/en/sql-reference/functions/rounding-functions.md"] | roundAge returns incorrect value | Function **roundAge**, described in the [documentation](https://clickhouse.com/docs/en/sql-reference/functions/rounding-functions#roundagenum):
> If the number is less than 18, it returns 0. Otherwise, it rounds the number down to a number from the set: 18, 25, 35, 45, 55.
But for 1-17 it returns 17.
```
SELECT
number,
roundAge(number)
FROM
(
SELECT number
FROM system.numbers
LIMIT 20
)
┌─number─┬─roundAge(number)─┐
│      0 │                0 │
│      1 │               17 │
│      2 │               17 │
│      3 │               17 │
│      4 │               17 │
│      5 │               17 │
│      6 │               17 │
│      7 │               17 │
│      8 │               17 │
│      9 │               17 │
│     10 │               17 │
│     11 │               17 │
│     12 │               17 │
│     13 │               17 │
│     14 │               17 │
│     15 │               17 │
│     16 │               17 │
│     17 │               17 │
│     18 │               18 │
│     19 │               18 │
└────────┴──────────────────┘
```
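The observed (as opposed to documented) mapping can be modeled like this in Python; the brackets above 19 are an assumption based on the documented set 18/25/35/45/55, since the report only exercises 0-19:

```python
def round_age_observed(n: int) -> int:
    # Mapping inferred from the table above; upper brackets (25/35/45/55)
    # are assumed from the documented set and not exercised in this report.
    if n == 0:
        return 0
    if n < 18:
        return 17  # contradicts the docs, which promise 0 for anything below 18
    for bound, value in ((25, 18), (35, 25), (45, 35), (55, 45)):
        if n < bound:
            return value
    return 55

print([round_age_observed(n) for n in range(20)])
```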
Reproduced at least on versions 22.8, 23.2, 23.3 | https://github.com/ClickHouse/ClickHouse/issues/48640 | https://github.com/ClickHouse/ClickHouse/pull/48673 | dd64eaed668a35f083d6690023d1bf46a988a68d | f619208b862104577530b9eb1202e8fd8c87216c | "2023-04-11T11:41:21Z" | c++ | "2023-04-12T08:32:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,580 | ["docs/en/sql-reference/statements/show.md", "src/Analyzer/Passes/QueryAnalysisPass.cpp", "src/Parsers/ParserTablePropertiesQuery.cpp", "tests/queries/0_stateless/02710_show_table.reference", "tests/queries/0_stateless/02710_show_table.sql"] | Support `SHOW TABLE` syntax meaning the same as `SHOW CREATE TABLE` | **Use case**
Read https://staging.clickhouse.com/blog/redshift-vs-clickhouse-comparison
**Describe the solution you'd like**
Trivial. | https://github.com/ClickHouse/ClickHouse/issues/48580 | https://github.com/ClickHouse/ClickHouse/pull/48591 | 567111f146a05b498f91328acbebbb16210099f5 | 8eabc4306883f527f9713aa264139e21412fa2ba | "2023-04-10T03:05:01Z" | c++ | "2023-04-14T16:42:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,545 | ["src/Interpreters/AsynchronousInsertQueue.cpp", "tests/queries/0_stateless/02714_async_inserts_empty_data.reference", "tests/queries/0_stateless/02714_async_inserts_empty_data.sh"] | Unexpected exception while inserting empty data with async_insert=1 |
**Unexpected behaviour**
Insertion of the empty data set (without records) causes an unexpected and unclear error when async_insert is enabled, error: `The associated promise has been destructed prior to the associated state becoming ready`.
See the stack trace below. The same error will be visible on the client side.
**How to reproduce**
* ClickHouse starting from 22.12 (22.12.6.22)
* HTTP interface (curl, clickhouse/[email protected])
* async_insert should be enabled
* Table for demonstration:
```
create table default.async_insert_issue (
key String
, data UInt32
)
engine = MergeTree() order by (key);
```
* Curl command for demonstration:
```
echo '' | curl 'http://localhost:8123?query=insert%20into%20default.async_insert_issue%20settings%20async_insert%3D1%20format%20JSONEachRow' --data-binary @-
```
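For readability, the percent-encoded query inside that curl command decodes as follows (plain Python, no server required); the effectively empty body piped in by `echo ''` is what triggers the error below:

```python
from urllib.parse import unquote

# The exact query string from the curl reproduction above.
encoded = ("insert%20into%20default.async_insert_issue%20settings"
           "%20async_insert%3D1%20format%20JSONEachRow")
print(unquote(encoded))
```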
**Expected behavior**
Insertion requests without data should be silently ignored, or the user should be warned that no data was provided.
There should be no unexpected errors.
**Error message and/or stacktrace**
```
2023.04.07 10:34:54.670440 [ 285 ] {3f869f30-6ef5-4430-8eff-4b692f6c97e8} <Error> DynamicQueryHandler: std::exception. Code: 1001, type: std::__1::future_error, e.what() = The associated promise has been destructed prior to the associated state becoming ready., Stack trace (when copying this message, always include the lines below):
0. std::__1::promise<void>::~promise() @ 0x13b0a7ac in /usr/bin/clickhouse
1. ? @ 0xfa01778 in /usr/bin/clickhouse
2. ? @ 0xfa00e84 in /usr/bin/clickhouse
3. DB::AsynchronousInsertQueue::processData(DB::AsynchronousInsertQueue::InsertQuery, std::__1::unique_ptr<DB::AsynchronousInsertQueue::InsertData, std::__1::default_delete<DB::AsynchronousInsertQueue::InsertData>>, std::__1::shared_ptr<DB::Context const>) @ 0xf9feff4 in /usr/bin/clickhouse
4. ? @ 0xfa01098 in /usr/bin/clickhouse
5. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__1::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0xbabb434 in /usr/bin/clickhouse
6. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) @ 0xbabdafc in /usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xbab7bf0 in /usr/bin/clickhouse
8. ? @ 0xbabcae8 in /usr/bin/clickhouse
9. start_thread @ 0x7624 in /usr/lib/aarch64-linux-gnu/libpthread-2.31.so
10. ? @ 0xd149c in /usr/lib/aarch64-linux-gnu/libc-2.31.so
(version 23.2.5.46 (official build))
```
PS: Actually not a big deal, but logs are flooded with those errors.
| https://github.com/ClickHouse/ClickHouse/issues/48545 | https://github.com/ClickHouse/ClickHouse/pull/48663 | a8c892f92567ebb9bcacf08d0e27579b672117f7 | 4a1c868ca9c74ca2fd462e7e119479c1fea50997 | "2023-04-07T15:11:34Z" | c++ | "2023-04-13T17:44:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,514 | ["src/Interpreters/getColumnFromBlock.cpp", "tests/queries/0_stateless/02709_storage_memory_compressed.reference", "tests/queries/0_stateless/02709_storage_memory_compressed.sql"] | segmentation fault in Memory engine with compression | see https://fiddle.clickhouse.com/865783e7-90a1-4bee-afa2-244ccd363ae4
```sql
CREATE TABLE users (uid Int16, name String, age Int16) ENGINE=Memory SETTINGS compress=1;
INSERT INTO users VALUES (1231, 'John', 33);
INSERT INTO users VALUES (6666, 'Ksenia', 48);
INSERT INTO users VALUES (8888, 'Alice', 50);
SELECT * FROM users;
Received exception from server (version 23.3.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Bad cast from type DB::ColumnCompressed to DB::ColumnVector<short>. (LOGICAL_ERROR)
(query: SELECT * FROM users;)
```
also segment fault (custom build):
```sql
CREATE TABLE default.hits_memory AS datasets.hits_v1 Engine=Memory SETTINGS compress=1;
INSERT INTO default.hits_memory SELECT * FROM datasets.hits_v1 LIMIT 100000; -- dataset hits_v1 is populated beforehand
SELECT * FROM default.hits_memory LIMIT 10
SELECT *
FROM default.hits_memory
LIMIT 10
Query id: b8db6b14-78bd-44aa-8a60-915900959c70
2023.04.07 01:24:42.947757 [ 341 ] <Fatal> BaseDaemon: ########################################
2023.04.07 01:24:42.947960 [ 341 ] <Fatal> BaseDaemon: (version 23.2.4.7, build id: 8EC24C8B6DF9DDD4DFEBCF86CA67A3E429748E2B) (from thread 337) (query_id: b8db6b14-78bd-44aa-8a60-915900959c70) (query: SELECT * FROM default.hits_memory LIMIT 10) Received signal Segmentation fault (11)
2023.04.07 01:24:42.948067 [ 341 ] <Fatal> BaseDaemon: Address: 0x44b85 Access: write. Address not mapped to object.
2023.04.07 01:24:42.948175 [ 341 ] <Fatal> BaseDaemon: Stack trace: 0x124dcbaf 0x124d6441 0x1351d725 0x139bb161 0x142f3ad5 0x142f3646 0x1430cf6c 0x1430217b 0x143012d9 0x14301020 0x14310c9f 0xdcd3b4a 0xdcd9201 0x7f4aa859b609 0x7f4aa84c0133
2023.04.07 01:24:42.948295 [ 341 ] <Fatal> BaseDaemon: 2. DB::SerializationArray::enumerateStreams(DB::ISerialization::EnumerateStreamsSettings&, std::__1::function<void (DB::ISerialization::SubstreamPath const&)> const&, DB::ISerialization::SubstreamData const&) const @ 0x124dcbaf in /usr/bin/clickhouse
2023.04.07 01:24:42.948406 [ 341 ] <Fatal> BaseDaemon: 3. DB::ISerialization::enumerateStreams(std::__1::function<void (DB::ISerialization::SubstreamPath const&)> const&, std::__1::shared_ptr<DB::IDataType const> const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&) const @ 0x124d6441 in /usr/bin/clickhouse
2023.04.07 01:24:42.948474 [ 341 ] <Fatal> BaseDaemon: 4. DB::fillMissingColumns(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&, unsigned long, DB::NamesAndTypesList const&, DB::NamesAndTypesList const&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const>) @ 0x1351d725 in /usr/bin/clickhouse
2023.04.07 01:24:42.948568 [ 341 ] <Fatal> BaseDaemon: 5. ? @ 0x139bb161 in /usr/bin/clickhouse
2023.04.07 01:24:42.948615 [ 341 ] <Fatal> BaseDaemon: 6. DB::ISource::tryGenerate() @ 0x142f3ad5 in /usr/bin/clickhouse
2023.04.07 01:24:42.948651 [ 341 ] <Fatal> BaseDaemon: 7. DB::ISource::work() @ 0x142f3646 in /usr/bin/clickhouse
2023.04.07 01:24:42.948689 [ 341 ] <Fatal> BaseDaemon: 8. DB::ExecutionThreadContext::executeTask() @ 0x1430cf6c in /usr/bin/clickhouse
2023.04.07 01:24:42.948741 [ 341 ] <Fatal> BaseDaemon: 9. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x1430217b in /usr/bin/clickhouse
2023.04.07 01:24:42.948809 [ 341 ] <Fatal> BaseDaemon: 10. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x143012d9 in /usr/bin/clickhouse
2023.04.07 01:24:42.948870 [ 341 ] <Fatal> BaseDaemon: 11. DB::PipelineExecutor::execute(unsigned long) @ 0x14301020 in /usr/bin/clickhouse
2023.04.07 01:24:42.948908 [ 341 ] <Fatal> BaseDaemon: 12. ? @ 0x14310c9f in /usr/bin/clickhouse
2023.04.07 01:24:42.948952 [ 341 ] <Fatal> BaseDaemon: 13. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xdcd3b4a in /usr/bin/clickhouse
2023.04.07 01:24:42.948990 [ 341 ] <Fatal> BaseDaemon: 14. ? @ 0xdcd9201 in /usr/bin/clickhouse
2023.04.07 01:24:42.949022 [ 341 ] <Fatal> BaseDaemon: 15. ? @ 0x7f4aa859b609 in ?
2023.04.07 01:24:42.949050 [ 341 ] <Fatal> BaseDaemon: 16. __clone @ 0x7f4aa84c0133 in ?
2023.04.07 01:24:42.949088 [ 341 ] <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
-- server host name stripped from log
```
| https://github.com/ClickHouse/ClickHouse/issues/48514 | https://github.com/ClickHouse/ClickHouse/pull/48517 | abe3c9b9db78c1bad5582807ab281cee4eef8536 | 5f01b8a2b59f6f340c65bfbc4ed0dd655d997ff9 | "2023-04-06T17:36:55Z" | c++ | "2023-04-13T17:52:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,496 | ["src/Planner/Planner.cpp", "tests/queries/0_stateless/02932_parallel_replicas_fuzzer.reference", "tests/queries/0_stateless/02932_parallel_replicas_fuzzer.sql"] | Logical error: 'Chunk info was not set for chunk in GroupingAggregatedTransform.' | **Describe the bug**
https://s3.amazonaws.com/clickhouse-test-reports/0/3ad0a6ac1861292ef3e74d2738a7c67f44b0932d/fuzzer_astfuzzermsan/report.html
```
2023.04.06 04:03:52.021197 [ 495 ] {f2a512bd-c6c5-455b-bb60-53bba1c5f6d8} <Fatal> : Logical error: 'Chunk info was not set for chunk in GroupingAggregatedTransform.'.
2023.04.06 04:03:52.022113 [ 502 ] {} <Fatal> BaseDaemon: ########################################
2023.04.06 04:03:52.022301 [ 502 ] {} <Fatal> BaseDaemon: (version 23.4.1.1 (official build), build id: D27FC0279FD8E768868AC5CAC2AD96D9EF2EA873) (from thread 495) (query_id: f2a512bd-c6c5-455b-bb60-53bba1c5f6d8) (query: SELECT NULL FROM t_02709__fuzz_23 FINAL GROUP BY sign, '1023' ORDER BY nan DESC, [0, NULL, NULL, NULL, NULL] DESC SETTINGS max_parallel_replicas = 3, allow_experimental_parallel_reading_from_replicas = 1, use_hedged_requests = 0, cluster_for_parallel_replicas = 'parallel_replicas') Received signal Aborted (6)
2023.04.06 04:03:52.022512 [ 502 ] {} <Fatal> BaseDaemon:
2023.04.06 04:03:52.022697 [ 502 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f96aa57200b 0x7f96aa551859 0x2994b2b1 0x2994c5ff 0x1670c3ba 0x4d3617bc 0x4d35f33c 0x4d364da3 0x2a074ebc 0x4c9236d0 0x4c909cc0 0x4c90f8ce 0x29cd4bf2 0x29ce5d23 0x7f96aa729609 0x7f96aa64e133
2023.04.06 04:03:52.022896 [ 502 ] {} <Fatal> BaseDaemon: 4. raise @ 0x7f96aa57200b in ?
2023.04.06 04:03:52.023056 [ 502 ] {} <Fatal> BaseDaemon: 5. abort @ 0x7f96aa551859 in ?
2023.04.06 04:03:52.351159 [ 502 ] {} <Fatal> BaseDaemon: 6. ./build_docker/./src/Common/Exception.cpp:0: DB::abortOnFailedAssertion(String const&) @ 0x2994b2b1 in /workspace/clickhouse
2023.04.06 04:03:52.667905 [ 502 ] {} <Fatal> BaseDaemon: 7.1. inlined from ./build_docker/./src/Common/Exception.cpp:0: DB::handle_error_code(String const&, int, bool, std::vector<void*, std::allocator<void*>> const&)
2023.04.06 04:03:52.668116 [ 502 ] {} <Fatal> BaseDaemon: 7. ./build_docker/./src/Common/Exception.cpp:92: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x2994c5ff in /workspace/clickhouse
2023.04.06 04:04:01.382279 [ 502 ] {} <Fatal> BaseDaemon: 8. DB::Exception::Exception<char const (&) [65], void>(int, char const (&) [65]) @ 0x1670c3ba in /workspace/clickhouse
2023.04.06 04:04:01.696671 [ 502 ] {} <Fatal> BaseDaemon: 9. ./build_docker/./src/Processors/Transforms/MergingAggregatedMemoryEfficientTransform.cpp:256: DB::GroupingAggregatedTransform::addChunk(DB::Chunk, unsigned long) @ 0x4d3617bc in /workspace/clickhouse
2023.04.06 04:04:02.003459 [ 502 ] {} <Fatal> BaseDaemon: 10. ./build_docker/./src/Processors/Transforms/MergingAggregatedMemoryEfficientTransform.cpp:50: DB::GroupingAggregatedTransform::readFromAllInputs() @ 0x4d35f33c in /workspace/clickhouse
2023.04.06 04:04:02.341257 [ 502 ] {} <Fatal> BaseDaemon: 11. ./build_docker/./src/Processors/Transforms/MergingAggregatedMemoryEfficientTransform.cpp:141: DB::GroupingAggregatedTransform::prepare() @ 0x4d364da3 in /workspace/clickhouse
2023.04.06 04:04:03.821105 [ 502 ] {} <Fatal> BaseDaemon: 12. ./build_docker/./src/Processors/IProcessor.h:193: DB::IProcessor::prepare(std::vector<unsigned long, std::allocator<unsigned long>> const&, std::vector<unsigned long, std::allocator<unsigned long>> const&) @ 0x2a074ebc in /workspace/clickhouse
2023.04.06 04:04:04.045051 [ 502 ] {} <Fatal> BaseDaemon: 13. ./build_docker/./src/Processors/Executors/ExecutingGraph.cpp:0: DB::ExecutingGraph::updateNode(unsigned long, std::queue<DB::ExecutingGraph::Node*, std::deque<DB::ExecutingGraph::Node*, std::allocator<DB::ExecutingGraph::Node*>>>&, std::queue<DB::ExecutingGraph::Node*, std::deque<DB::ExecutingGraph::Node*, std::allocator<DB::ExecutingGraph::Node*>>>&) @ 0x4c9236d0 in /workspace/clickhouse
2023.04.06 04:04:04.283519 [ 502 ] {} <Fatal> BaseDaemon: 14. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:0: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x4c909cc0 in /workspace/clickhouse
2023.04.06 04:04:04.553010 [ 502 ] {} <Fatal> BaseDaemon: 15.1. inlined from ./build_docker/./base/base/../base/scope_guard.h:48: ~BasicScopeGuard
2023.04.06 04:04:04.553147 [ 502 ] {} <Fatal> BaseDaemon: 15.2. inlined from ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:345: operator()
2023.04.06 04:04:04.553325 [ 502 ] {} <Fatal> BaseDaemon: 15.3. inlined from ./build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0&>()()) std::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&)
2023.04.06 04:04:04.553448 [ 502 ] {} <Fatal> BaseDaemon: 15.4. inlined from ./build_docker/./contrib/llvm-project/libcxx/include/tuple:1789: decltype(auto) std::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::tuple<>&, std::__tuple_indices<>)
2023.04.06 04:04:04.553549 [ 502 ] {} <Fatal> BaseDaemon: 15.5. inlined from ./build_docker/./contrib/llvm-project/libcxx/include/tuple:1798: decltype(auto) std::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::tuple<>&)
2023.04.06 04:04:04.553619 [ 502 ] {} <Fatal> BaseDaemon: 15.6. inlined from ./build_docker/./src/Common/ThreadPool.h:227: operator()
2023.04.06 04:04:04.553764 [ 502 ] {} <Fatal> BaseDaemon: 15.7. inlined from ./build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0>()()) std::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(DB::PipelineExecutor::spawnThreads()::$_0&&)
2023.04.06 04:04:04.553931 [ 502 ] {} <Fatal> BaseDaemon: 15.8. inlined from ./build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479: void std::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&)
2023.04.06 04:04:04.554093 [ 502 ] {} <Fatal> BaseDaemon: 15.9. inlined from ./build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235: std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>::operator()[abi:v15000]()
2023.04.06 04:04:04.554158 [ 502 ] {} <Fatal> BaseDaemon: 15. ./build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716: void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x4c90f8ce in /workspace/clickhouse
2023.04.06 04:04:04.739078 [ 502 ] {} <Fatal> BaseDaemon: 16.1. inlined from ./build_docker/./base/base/../base/wide_integer_impl.h:789: bool wide::integer<128ul, unsigned int>::_impl::operator_eq<wide::integer<128ul, unsigned int>>(wide::integer<128ul, unsigned int> const&, wide::integer<128ul, unsigned int> const&)
2023.04.06 04:04:04.739235 [ 502 ] {} <Fatal> BaseDaemon: 16.2. inlined from ./build_docker/./base/base/../base/wide_integer_impl.h:1456: bool wide::operator==<128ul, unsigned int, 128ul, unsigned int>(wide::integer<128ul, unsigned int> const&, wide::integer<128ul, unsigned int> const&)
2023.04.06 04:04:04.739337 [ 502 ] {} <Fatal> BaseDaemon: 16.3. inlined from ./build_docker/./base/base/../base/strong_typedef.h:42: StrongTypedef<wide::integer<128ul, unsigned int>, DB::UUIDTag>::operator==(StrongTypedef<wide::integer<128ul, unsigned int>, DB::UUIDTag> const&) const
2023.04.06 04:04:04.739413 [ 502 ] {} <Fatal> BaseDaemon: 16.4. inlined from ./build_docker/./src/Common/OpenTelemetryTraceContext.h:65: DB::OpenTelemetry::Span::isTraceEnabled() const
2023.04.06 04:04:04.739477 [ 502 ] {} <Fatal> BaseDaemon: 16. ./build_docker/./src/Common/ThreadPool.cpp:398: ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0x29cd4bf2 in /workspace/clickhouse
2023.04.06 04:04:04.975646 [ 502 ] {} <Fatal> BaseDaemon: 17. ./build_docker/./src/Common/ThreadPool.cpp:0: void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, long, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x29ce5d23 in /workspace/clickhouse
2023.04.06 04:04:04.975794 [ 502 ] {} <Fatal> BaseDaemon: 18. ? @ 0x7f96aa729609 in ?
2023.04.06 04:04:04.975869 [ 502 ] {} <Fatal> BaseDaemon: 19. clone @ 0x7f96aa64e133 in ?
2023.04.06 04:04:07.935735 [ 502 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: A45E3C6F7F2CD1160520AB34E02EABA7)
2023.04.06 04:04:31.887038 [ 162 ] {} <Fatal> Application: Child process was terminated by signal 6.
```
| https://github.com/ClickHouse/ClickHouse/issues/48496 | https://github.com/ClickHouse/ClickHouse/pull/57414 | 9c2ef4eae5d06deef52f66ccb8406cd64cd51703 | ae003bcc437ff84feb950660ad97048c36cc0ec2 | "2023-04-06T15:39:00Z" | c++ | "2023-12-17T19:51:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,493 | ["src/Core/Settings.h", "src/Interpreters/executeQuery.cpp", "src/Storages/IStorage.h", "src/Storages/MergeTree/MergeTreeData.h", "src/Storages/MergeTree/MergeTreeSettings.h", "tests/queries/0_stateless/02725_async_insert_table_setting.reference", "tests/queries/0_stateless/02725_async_insert_table_setting.sh"] | Makes Async Inserts a merge tree level setting |
**Use case**
One of the main challenges of supporting ClickHouse. A per-table setting would be the optimal user experience: it is rare for users to need to mix insert workload types on the same table.
**Describe the solution you'd like**
A MergeTree-level setting, set at table creation time.
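A sketch of what this could look like (treating `async_insert` as a table-level MergeTree setting is an assumption for illustration; the exact syntax would be defined by the implementation):

```sql
-- Hypothetical: async_insert fixed at CREATE TABLE time instead of
-- relying on per-query settings or user profiles.
CREATE TABLE events
(
    `ts` DateTime,
    `payload` String
)
ENGINE = MergeTree
ORDER BY ts
SETTINGS async_insert = 1;
```

All inserts into such a table would then take the asynchronous path without clients having to set anything.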
| https://github.com/ClickHouse/ClickHouse/issues/48493 | https://github.com/ClickHouse/ClickHouse/pull/49122 | 17d6e2cc667bfd058b6622f0d5993941bbdf5911 | 7896d307379bc813665fa5b11d08c202ea67f4fb | "2023-04-06T14:51:05Z" | c++ | "2023-05-03T14:01:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,462 | ["src/AggregateFunctions/AggregateFunctionGroupArray.cpp", "src/AggregateFunctions/AggregateFunctionNull.cpp", "tests/queries/0_stateless/00529_orantius.reference", "tests/queries/0_stateless/01664_array_slice_ubsan.reference", "tests/queries/0_stateless/02713_group_array_nullable.reference", "tests/queries/0_stateless/02713_group_array_nullable.sql"] | Should `returns_default_when_only_null` be true for `groupArray` |
Since the `Array` type cannot be null, should the property `returns_default_when_only_null` for `groupArray` be true?
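To illustrate the question (the exact behavior depends on the server version; the semantics described here are an assumption about how the `Null` adaptor handles all-NULL input):

```sql
-- With returns_default_when_only_null = false, aggregating an
-- all-NULL Nullable input yields NULL rather than the type's
-- default value, which for Array would be [].
SELECT groupArray(CAST(NULL, 'Nullable(UInt8)'));
```

Setting the property to true would make this return `[]` instead.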
```C++
void registerAggregateFunctionGroupArray(AggregateFunctionFactory & factory)
{
AggregateFunctionProperties properties = { .returns_default_when_only_null = false, .is_order_dependent = true };
factory.registerFunction("groupArray", { createAggregateFunctionGroupArray<false>, properties });
factory.registerFunction("groupArraySample", { createAggregateFunctionGroupArraySample, properties });
factory.registerFunction("groupArrayLast", { createAggregateFunctionGroupArray<true>, properties });
}
```
| https://github.com/ClickHouse/ClickHouse/issues/48462 | https://github.com/ClickHouse/ClickHouse/pull/48593 | 73741650a7ddd6462d76f4a15cce2a9fe372855c | ea5339ed4a15d6dbda5d94a6bc174a280c276dca | "2023-04-06T09:16:43Z" | c++ | "2023-04-14T19:31:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,460 | ["tests/queries/0_stateless/02834_analyzer_with_statement_references.reference", "tests/queries/0_stateless/02834_analyzer_with_statement_references.sql"] | Analyzer: With statements references not working | When using the analyzer I can't reference a CTE from another one. This works when disabling the analyzer:
Analyzer:
```sql
WITH
test_aliases AS
(
SELECT *
FROM numbers(20)
),
alias2 AS
(
SELECT *
FROM test_aliases
)
SELECT *
FROM alias2
FORMAT `Null`
SETTINGS allow_experimental_analyzer = 1
0 rows in set. Elapsed: 0.211 sec.
Received exception from server (version 23.4.1):
Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Unknown table expression identifier 'test_aliases' in scope alias2. (UNKNOWN_TABLE)
```
No analyzer:
```sql
WITH
test_aliases AS
(
SELECT *
FROM numbers(20)
),
alias2 AS
(
SELECT *
FROM test_aliases
)
SELECT *
FROM alias2
FORMAT `Null`
SETTINGS allow_experimental_analyzer = 0
Query id: afa5ddf8-8afd-413f-947d-1da914310b40
Ok.
```
Version:
```sql
ββversion()ββ
β 23.4.1.1 β
βββββββββββββ
```
Relevant settings:
```sql
ββnameβββββββββββββββββββββββββββββββββββββββββββ¬βvalueβββββββ
β max_threads β 16 β
β use_uncompressed_cache β 0 β
β compile_aggregate_expressions β 0 β
β insert_deduplicate β 1 β
β http_wait_end_of_query β 1 β
β http_response_buffer_size β 104857600 β
β joined_subquery_requires_alias β 0 β
β allow_experimental_analyzer β 1 β
β max_bytes_before_external_group_by β 1442450940 β
β max_execution_time β 0 β
β max_expanded_ast_elements β 50000 β
β max_memory_usage β 5000000000 β
β memory_usage_overcommit_max_wait_microseconds β 50000 β
β log_query_threads β 0 β
β max_partitions_per_insert_block β 100 β
β enable_lightweight_delete β 0 β
β optimize_monotonous_functions_in_order_by β 0 β
β enable_global_with_statement β 0 β
β optimize_rewrite_sum_if_to_count_if β 0 β
β insert_keeper_max_retries β 15 β
β insert_keeper_retry_initial_backoff_ms β 100 β
β insert_keeper_retry_max_backoff_ms β 2000 β
β input_format_null_as_default β 0 β
βββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββ
```
| https://github.com/ClickHouse/ClickHouse/issues/48460 | https://github.com/ClickHouse/ClickHouse/pull/52875 | 762d5a2ce83c96e81d7cbd036cabe31ee931a2e0 | 8c485f315372c20031f11e10878188d1526516e8 | "2023-04-06T08:27:33Z" | c++ | "2023-08-25T16:55:30Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,418 | ["base/base/IPv4andIPv6.h", "src/AggregateFunctions/Helpers.h", "src/Common/HashTable/Hash.h", "src/Common/HashTable/HashTable.h", "src/DataTypes/DataTypeIPv4andIPv6.h", "src/Processors/Merges/Algorithms/SummingSortedAlgorithm.cpp", "tests/queries/0_stateless/02710_aggregation_nested_map_ip_uuid.reference", "tests/queries/0_stateless/02710_aggregation_nested_map_ip_uuid.sql"] | On a SummingMergeTree, a Map with an IPv4/IPv6 field does not sum. | Hello and thanks for your work on ClickHouse!
**Describe what's wrong**
I have found a bug reproducible on the latest available CH build.
I don't know if it's related to #39965 since I don't have the exact same behaviour.
On a SummingMergeTree, a Map with an IPv4/IPv6 field does not sum.
```
ClickHouse client version 23.4.1.375 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 23.4.1 revision 54462.
---
clickhouse :) CREATE TABLE summing_table
(
`id` UInt32,
`ipMap.value` Array(IPv4) DEFAULT [],
`ipMap.total` Array(UInt32) DEFAULT [],
`intMap.value` Array(UInt8) DEFAULT [],
`intMap.total` Array(UInt32) DEFAULT [],
)
ENGINE = SummingMergeTree
ORDER BY id;
---
clickhouse :) insert into summing_table(id, ipMap.value, ipMap.total, intMap.value, intMap.total) values(1, ['10.20.30.40'], [1], [123], [10]);
clickhouse :) SELECT * FROM summing_table
ββidββ¬βipMap.valueββββββ¬βipMap.totalββ¬βintMap.valueββ¬βintMap.totalββ
β 1 β ['10.20.30.40'] β [1] β [123] β [10] β
ββββββ΄ββββββββββββββββββ΄ββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ
---
clickhouse :) insert into summing_table(id, ipMap.value, ipMap.total, intMap.value, intMap.total) values(1, ['10.20.30.40'], [1], [123], [10]);
clickhouse :) OPTIMIZE TABLE summing_table FINAL
clickhouse :) SELECT * FROM summing_table
ββidββ¬βipMap.valueββββββ¬βipMap.totalββ¬βintMap.valueββ¬βintMap.totalββ
β 1 β ['10.20.30.40'] β [1] β [123] β [20] β
ββββββ΄ββββββββββββββββββ΄ββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ
Expected :
ββidββ¬βipMap.valueββββββ¬βipMap.totalββ¬βintMap.valueββ¬βintMap.totalββ
β 1 β ['10.20.30.40'] β [2] β [123] β [20] β
ββββββ΄ββββββββββββββββββ΄ββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ
---
clickhouse :) insert into summing_table(id, ipMap.value, ipMap.total, intMap.value, intMap.total) values(1, ['50.60.70.80'], [10], [124], [50]);
clickhouse :) OPTIMIZE TABLE summing_table FINAL
clickhouse :) SELECT * FROM summing_table
ββidββ¬βipMap.valueββββββ¬βipMap.totalββ¬βintMap.valueββ¬βintMap.totalββ
β 1 β ['10.20.30.40'] β [1] β [123,124] β [20,50] β
ββββββ΄ββββββββββββββββββ΄ββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ
Expected :
ββidββ¬βipMap.valueβββββββββββββββββββββ¬βipMap.totalββ¬βintMap.valueββ¬βintMap.totalββ
β 1 β ['10.20.30.40', '50.60.70.80'] β [2, 10] β [123,124] β [20,50] β
ββββββ΄βββββββββββββββββββββββββββββββββ΄ββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ
```
**Does it reproduce on recent release?**
Reproducible on ClickHouse server version 23.4.1 revision 54462.
It worked well on ClickHouse 22.
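Until IPv4/IPv6 map keys are summed correctly, a possible workaround (an untested sketch, based on SummingMergeTree only recognizing nested maps whose key columns have supported types) is to keep the key column numeric and convert on read:

```sql
-- Store the IPv4 as its UInt32 representation so the nested map is summed,
-- and render it back with IPv4NumToString when selecting.
CREATE TABLE summing_table_u32
(
    `id` UInt32,
    `ipMap.value` Array(UInt32) DEFAULT [],
    `ipMap.total` Array(UInt32) DEFAULT []
)
ENGINE = SummingMergeTree
ORDER BY id;

INSERT INTO summing_table_u32 VALUES (1, [IPv4StringToNum('10.20.30.40')], [1]);

SELECT id, arrayMap(v -> IPv4NumToString(v), `ipMap.value`) AS ips, `ipMap.total`
FROM summing_table_u32;
```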
| https://github.com/ClickHouse/ClickHouse/issues/48418 | https://github.com/ClickHouse/ClickHouse/pull/48556 | 3993aef8e281815ac4269d44e27bb1dcdcff21cb | cd88024a33c0324e96e2736669ad4a134129a2ea | "2023-04-05T09:27:00Z" | c++ | "2023-04-18T11:03:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,394 | ["docs/en/sql-reference/functions/type-conversion-functions.md", "src/Functions/parseDateTime.cpp", "tests/queries/0_stateless/02668_parse_datetime.reference", "tests/queries/0_stateless/02668_parse_datetime.sql"] | MySQL compatibility: format is not supported for fractional seconds | **Describe the unexpected behaviour**
ClickHouse complains about the use of the formatter `%f`, which stands for fractional seconds (i.e. microseconds) in MySQL.
**How to reproduce**
* Which ClickHouse server version to use: the latest main branch version as of the 3rd of April 2023 (`23.4.1.170`)
* Which interface to use, if matters: MySQL
* `CREATE TABLE` statements for all tables involved: use [cell towers](https://clickhouse.com/docs/en/getting-started/example-datasets/cell-towers) dataset, then query via MySQL protocol
* Queries to run that lead to unexpected result
```
SELECT CAST(CAST(DATE_FORMAT(created, '%Y-%m-%d %H:%i:%s') AS DATETIME) AS DATE) AS qt_a52bvfcv4c, COUNT(1) AS qt_up6h9rbv4c, area FROM cell_towers WHERE CAST(DATE_FORMAT(created, '%Y-%m-%d %H:%i:%s') AS DATETIME) BETWEEN STR_TO_DATE('2015-04-04T16:30:00.000000', '%Y-%m-%dT%H:%i:%s.%f') AND STR_TO_DATE('2017-04-04T16:00:00.000000', '%Y-%m-%dT%H:%i:%s.%f') GROUP BY qt_a52bvfcv4c, area;
```
**Expected behavior**
The date formatter works with fractional seconds.
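A minimal reproduction without going through the MySQL protocol (the server rewrites `STR_TO_DATE` to `parseDateTimeOrNull`, as the log below shows):

```sql
-- Fails with NOT_IMPLEMENTED: "format is not supported for fractional seconds"
SELECT parseDateTimeOrNull('2015-04-04T16:30:00.000000', '%Y-%m-%dT%H:%i:%s.%f');
```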
**Error message and/or stacktrace**
```
SELECT CAST(CAST(DATE_FORMAT(created, '%Y-%m-%d %H:%i:%s') AS DATETIME) AS DATE) AS qt_a52bvfcv4c, COUNT(1) AS qt_up6h9rbv4c, area FROM cell_towers WHERE CAST(DATE_FORMAT(created, '%Y-%m-%d %H:%i:%s') AS DATETIME) BETWEEN STR_TO_DATE('2015-04-04T16:30:00.000000', '%Y-%m-%dT%H:%i:%s.%f') AND STR_TO_DATE('2017-04-04T16:00:00.000000', '%Y-%m-%dT%H:%i:%s.%f') GROUP BY qt_a52bvfcv4c, area; (stage: Complete)
2023.04.04 13:56:55.225668 [ 293 ] {mysql:41:57f0867a-4037-4b66-95c3-bd7b01a062c9} <Error> executeQuery: Code: 48. DB::Exception: format is not supported for fractional seconds: While processing SELECT CAST(CAST(DATE_FORMAT(created, '%Y-%m-%d %H:%i:%s'), 'DATETIME'), 'DATE') AS qt_a52bvfcv4c, count() AS qt_up6h9rbv4c, area FROM cell_towers WHERE (CAST(DATE_FORMAT(created, '%Y-%m-%d %H:%i:%s'), 'DATETIME') >= parseDateTimeOrNull('2015-04-04T16:30:00.000000', '%Y-%m-%dT%H:%i:%s.%f')) AND (CAST(DATE_FORMAT(created, '%Y-%m-%d %H:%i:%s'), 'DATETIME') <= parseDateTimeOrNull('2017-04-04T16:00:00.000000', '%Y-%m-%dT%H:%i:%s.%f')) GROUP BY qt_a52bvfcv4c, area LIMIT 150042. (NOT_IMPLEMENTED) (version 23.4.1.170 (official build)) (from 74.125.88.45:46416)
```
CC @rschu1ze
| https://github.com/ClickHouse/ClickHouse/issues/48394 | https://github.com/ClickHouse/ClickHouse/pull/48420 | f82d7789af05f04094025d7fbcc1a00d7697d00d | 4e52dc672eb4aee754f080d41daaaf2feb45b0bf | "2023-04-04T15:53:50Z" | c++ | "2023-04-11T12:12:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,385 | ["src/Common/Exception.cpp"] | use-of-uninitialized-value in QueryCache::Writer | https://s3.amazonaws.com/clickhouse-test-reports/48122/e1e508f8ec200a0e5acd7e33726c36303032972d/stress_test__msan_.html
```
==1656==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x64215a7b in LZ4_count build_docker/./contrib/lz4/lib/lz4.c:625:13
#1 0x64215a7b in LZ4_compress_generic_validated build_docker/./contrib/lz4/lib/lz4.c:1103:29
#2 0x64215a7b in LZ4_compress_generic build_docker/./contrib/lz4/lib/lz4.c:1289:12
#3 0x64215a7b in LZ4_compress_fast_extState build_docker/./contrib/lz4/lib/lz4.c:1304:20
#4 0x64222aeb in LZ4_compress_fast build_docker/./contrib/lz4/lib/lz4.c:1376:14
#5 0x64222aeb in LZ4_compress_default build_docker/./contrib/lz4/lib/lz4.c:1387:12
#6 0x48becc17 in DB::ColumnCompressed::compressBuffer(void const*, unsigned long, bool) build_docker/./src/Columns/ColumnCompressed.cpp:27:27
#7 0x48f1c63a in DB::ColumnVector<unsigned long>::compress() const build_docker/./src/Columns/ColumnVector.cpp:926:23
#8 0x487121ac in DB::ColumnArray::compress() const build_docker/./src/Columns/ColumnArray.cpp:939:39
#9 0x4853228e in DB::QueryCache::Writer::finalizeWrite() build_docker/./src/Interpreters/Cache/QueryCache.cpp:285:63
#10 0x4cf73f1b in DB::StreamInQueryCacheTransform::finalizeWriteInQueryCache() build_docker/./src/Processors/Transforms/StreamInQueryCacheTransform.cpp:26:22
#11 0x430cd175 in DB::QueryPipeline::finalizeWriteInQueryCache() build_docker/./src/QueryPipeline/QueryPipeline.cpp:597:59
#12 0x483e8c98 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3::operator()(DB::QueryPipeline&) build_docker/./src/Interpreters/executeQuery.cpp:939:36
#13 0x483e8c98 in decltype(std::declval<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&>()(std::declval<DB::QueryPipeline&>())) std::__1::__invoke[abi:v15000]<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&>(DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#14 0x483e8c98 in void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&>(DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
#15 0x483e8c98 in std::__1::__function::__default_alloc_func<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3, void (DB::QueryPipeline&)>::operator()[abi:v15000](DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12
#16 0x483e8c98 in void std::__1::__function::__policy_invoker<void (DB::QueryPipeline&)>::__call_impl<std::__1::__function::__default_alloc_func<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3, void (DB::QueryPipeline&)>>(std::__1::__function::__policy_storage const*, DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16
#17 0x43073df3 in std::__1::__function::__policy_func<void (DB::QueryPipeline&)>::operator()[abi:v15000](DB::QueryPipeline&) const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
#18 0x43073df3 in std::__1::function<void (DB::QueryPipeline&)>::operator()(DB::QueryPipeline&) const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
#19 0x43073df3 in DB::BlockIO::onFinish() build_docker/./src/QueryPipeline/BlockIO.cpp:57:9
#20 0x4c26063e in DB::TCPHandler::runImpl() build_docker/./src/Server/TCPHandler.cpp
#21 0x4c2a62d9 in DB::TCPHandler::run() build_docker/./src/Server/TCPHandler.cpp:2038:9
#22 0x58a5fddd in Poco::Net::TCPServerConnection::start() build_docker/./base/poco/Net/src/TCPServerConnection.cpp:43:3
#23 0x58a6116e in Poco::Net::TCPServerDispatcher::run() build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:115:20
#24 0x591c206b in Poco::PooledThread::run() build_docker/./base/poco/Foundation/src/ThreadPool.cpp:188:14
#25 0x591bd601 in Poco::(anonymous namespace)::RunnableHolder::run() build_docker/./base/poco/Foundation/src/Thread.cpp:45:11
#26 0x591b9328 in Poco::ThreadImpl::runnableEntry(void*) build_docker/./base/poco/Foundation/src/Thread_POSIX.cpp:335:27
#27 0x7f5d65f67608 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x8608) (BuildId: 7b4536f41cdaa5888408e82d0836e33dcf436466)
#28 0x7f5d65e8c132 in __clone (/lib/x86_64-linux-gnu/libc.so.6+0x11f132) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
Uninitialized value was created by a heap allocation
#0 0xc988420 in malloc (/usr/bin/clickhouse+0xc988420) (BuildId: f8a908ed51f5d37f9c371bbf43ef3de50516c9b3)
#1 0x29717d07 in Allocator<false, false>::allocNoTrack(unsigned long, unsigned long) build_docker/./src/Common/Allocator.h:237:27
#2 0x2971775b in Allocator<false, false>::alloc(unsigned long, unsigned long) build_docker/./src/Common/Allocator.h:103:16
#3 0xca0525e in void DB::PODArrayBase<8ul, 4096ul, Allocator<false, false>, 63ul, 64ul>::resize<>(unsigned long) (/usr/bin/clickhouse+0xca0525e) (BuildId: f8a908ed51f5d37f9c371bbf43ef3de50516c9b3)
#4 0x48f0e7cd in DB::ColumnVector<unsigned long>::insertRangeFrom(DB::IColumn const&, unsigned long, unsigned long) build_docker/./src/Columns/ColumnVector.cpp:475:10
#5 0x48705bcc in DB::ColumnArray::insertRangeFrom(DB::IColumn const&, unsigned long, unsigned long) build_docker/./src/Columns/ColumnArray.cpp:532:15
#6 0x4c363944 in DB::Chunk::append(DB::Chunk const&, unsigned long, unsigned long) build_docker/./src/Processors/Chunk.cpp:183:36
#7 0x48530bee in DB::QueryCache::Writer::finalizeWrite() build_docker/./src/Interpreters/Cache/QueryCache.cpp:263:40
#8 0x4cf73f1b in DB::StreamInQueryCacheTransform::finalizeWriteInQueryCache() build_docker/./src/Processors/Transforms/StreamInQueryCacheTransform.cpp:26:22
#9 0x430cd175 in DB::QueryPipeline::finalizeWriteInQueryCache() build_docker/./src/QueryPipeline/QueryPipeline.cpp:597:59
#10 0x483e8c98 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3::operator()(DB::QueryPipeline&) build_docker/./src/Interpreters/executeQuery.cpp:939:36
#11 0x483e8c98 in decltype(std::declval<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&>()(std::declval<DB::QueryPipeline&>())) std::__1::__invoke[abi:v15000]<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&>(DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#12 0x483e8c98 in void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&>(DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
#13 0x483e8c98 in std::__1::__function::__default_alloc_func<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3, void (DB::QueryPipeline&)>::operator()[abi:v15000](DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12
#14 0x483e8c98 in void std::__1::__function::__policy_invoker<void (DB::QueryPipeline&)>::__call_impl<std::__1::__function::__default_alloc_func<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3, void (DB::QueryPipeline&)>>(std::__1::__function::__policy_storage const*, DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16
#15 0x43073df3 in std::__1::__function::__policy_func<void (DB::QueryPipeline&)>::operator()[abi:v15000](DB::QueryPipeline&) const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
#16 0x43073df3 in std::__1::function<void (DB::QueryPipeline&)>::operator()(DB::QueryPipeline&) const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
#17 0x43073df3 in DB::BlockIO::onFinish() build_docker/./src/QueryPipeline/BlockIO.cpp:57:9
#18 0x4c26063e in DB::TCPHandler::runImpl() build_docker/./src/Server/TCPHandler.cpp
#19 0x4c2a62d9 in DB::TCPHandler::run() build_docker/./src/Server/TCPHandler.cpp:2038:9
#20 0x58a5fddd in Poco::Net::TCPServerConnection::start() build_docker/./base/poco/Net/src/TCPServerConnection.cpp:43:3
#21 0x58a6116e in Poco::Net::TCPServerDispatcher::run() build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:115:20
#22 0x591c206b in Poco::PooledThread::run() build_docker/./base/poco/Foundation/src/ThreadPool.cpp:188:14
#23 0x591bd601 in Poco::(anonymous namespace)::RunnableHolder::run() build_docker/./base/poco/Foundation/src/Thread.cpp:45:11
#24 0x591b9328 in Poco::ThreadImpl::runnableEntry(void*) build_docker/./base/poco/Foundation/src/Thread_POSIX.cpp:335:27
#25 0x7f5d65f67608 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x8608) (BuildId: 7b4536f41cdaa5888408e82d0836e33dcf436466)
SUMMARY: MemorySanitizer: use-of-uninitialized-value build_docker/./contrib/lz4/lib/lz4.c:625:13 in LZ4_count
Exiting
Uninitialized bytes in __interceptor_write at offset 0 inside [0x7f57da815c50, 392)
==1656==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x299de4b6 in DB::WriteBufferFromFileDescriptorDiscardOnFailure::nextImpl() build_docker/./src/IO/WriteBufferFromFileDescriptorDiscardOnFailure.cpp:16:23
#1 0x29cfa3b9 in DB::WriteBuffer::next() build_docker/./src/IO/WriteBuffer.h:49:13
#2 0x2a16a8a3 in sanitizerDeathCallback() build_docker/./src/Daemon/BaseDaemon.cpp:447:9
#3 0xc968215 in __sanitizer::Die() crtstuff.c
#4 0xc97cdb2 in __msan_warning_with_origin_noreturn (/usr/bin/clickhouse+0xc97cdb2) (BuildId: f8a908ed51f5d37f9c371bbf43ef3de50516c9b3)
#5 0x64215a7b in LZ4_count build_docker/./contrib/lz4/lib/lz4.c:625:13
#6 0x64215a7b in LZ4_compress_generic_validated build_docker/./contrib/lz4/lib/lz4.c:1103:29
#7 0x64215a7b in LZ4_compress_generic build_docker/./contrib/lz4/lib/lz4.c:1289:12
#8 0x64215a7b in LZ4_compress_fast_extState build_docker/./contrib/lz4/lib/lz4.c:1304:20
#9 0x64222aeb in LZ4_compress_fast build_docker/./contrib/lz4/lib/lz4.c:1376:14
#10 0x64222aeb in LZ4_compress_default build_docker/./contrib/lz4/lib/lz4.c:1387:12
#11 0x48becc17 in DB::ColumnCompressed::compressBuffer(void const*, unsigned long, bool) build_docker/./src/Columns/ColumnCompressed.cpp:27:27
#12 0x48f1c63a in DB::ColumnVector<unsigned long>::compress() const build_docker/./src/Columns/ColumnVector.cpp:926:23
#13 0x487121ac in DB::ColumnArray::compress() const build_docker/./src/Columns/ColumnArray.cpp:939:39
#14 0x4853228e in DB::QueryCache::Writer::finalizeWrite() build_docker/./src/Interpreters/Cache/QueryCache.cpp:285:63
#15 0x4cf73f1b in DB::StreamInQueryCacheTransform::finalizeWriteInQueryCache() build_docker/./src/Processors/Transforms/StreamInQueryCacheTransform.cpp:26:22
#16 0x430cd175 in DB::QueryPipeline::finalizeWriteInQueryCache() build_docker/./src/QueryPipeline/QueryPipeline.cpp:597:59
#17 0x483e8c98 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3::operator()(DB::QueryPipeline&) build_docker/./src/Interpreters/executeQuery.cpp:939:36
#18 0x483e8c98 in decltype(std::declval<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&>()(std::declval<DB::QueryPipeline&>())) std::__1::__invoke[abi:v15000]<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&>(DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#19 0x483e8c98 in void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&>(DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
#20 0x483e8c98 in std::__1::__function::__default_alloc_func<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3, void (DB::QueryPipeline&)>::operator()[abi:v15000](DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12
#21 0x483e8c98 in void std::__1::__function::__policy_invoker<void (DB::QueryPipeline&)>::__call_impl<std::__1::__function::__default_alloc_func<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3, void (DB::QueryPipeline&)>>(std::__1::__function::__policy_storage const*, DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16
#22 0x43073df3 in std::__1::__function::__policy_func<void (DB::QueryPipeline&)>::operator()[abi:v15000](DB::QueryPipeline&) const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
#23 0x43073df3 in std::__1::function<void (DB::QueryPipeline&)>::operator()(DB::QueryPipeline&) const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
#24 0x43073df3 in DB::BlockIO::onFinish() build_docker/./src/QueryPipeline/BlockIO.cpp:57:9
#25 0x4c26063e in DB::TCPHandler::runImpl() build_docker/./src/Server/TCPHandler.cpp
#26 0x4c2a62d9 in DB::TCPHandler::run() build_docker/./src/Server/TCPHandler.cpp:2038:9
#27 0x58a5fddd in Poco::Net::TCPServerConnection::start() build_docker/./base/poco/Net/src/TCPServerConnection.cpp:43:3
#28 0x58a6116e in Poco::Net::TCPServerDispatcher::run() build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:115:20
#29 0x591c206b in Poco::PooledThread::run() build_docker/./base/poco/Foundation/src/ThreadPool.cpp:188:14
#30 0x591bd601 in Poco::(anonymous namespace)::RunnableHolder::run() build_docker/./base/poco/Foundation/src/Thread.cpp:45:11
#31 0x591b9328 in Poco::ThreadImpl::runnableEntry(void*) build_docker/./base/poco/Foundation/src/Thread_POSIX.cpp:335:27
#32 0x7f5d65f67608 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x8608) (BuildId: 7b4536f41cdaa5888408e82d0836e33dcf436466)
#33 0x7f5d65e8c132 in __clone (/lib/x86_64-linux-gnu/libc.so.6+0x11f132) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)
Uninitialized value was stored to memory at
#0 0xc981bd9 in __msan_memcpy (/usr/bin/clickhouse+0xc981bd9) (BuildId: f8a908ed51f5d37f9c371bbf43ef3de50516c9b3)
#1 0xca8395d in DB::WriteBuffer::write(char const*, unsigned long) (/usr/bin/clickhouse+0xca8395d) (BuildId: f8a908ed51f5d37f9c371bbf43ef3de50516c9b3)
#2 0x2a1720b0 in void DB::writePODBinary<int>(int const&, DB::WriteBuffer&) build_docker/./src/IO/WriteHelpers.h:85:9
#3 0x2a1720b0 in void DB::writeBinary<int>(int const&, DB::WriteBuffer&) build_docker/./src/IO/WriteHelpers.h:853:59
#4 0x2a16a850 in sanitizerDeathCallback() build_docker/./src/Daemon/BaseDaemon.cpp:442:5
#5 0xc968215 in __sanitizer::Die() crtstuff.c
#6 0x64215a7b in LZ4_count build_docker/./contrib/lz4/lib/lz4.c:625:13
#7 0x64215a7b in LZ4_compress_generic_validated build_docker/./contrib/lz4/lib/lz4.c:1103:29
#8 0x64215a7b in LZ4_compress_generic build_docker/./contrib/lz4/lib/lz4.c:1289:12
#9 0x64215a7b in LZ4_compress_fast_extState build_docker/./contrib/lz4/lib/lz4.c:1304:20
#10 0x64222aeb in LZ4_compress_fast build_docker/./contrib/lz4/lib/lz4.c:1376:14
#11 0x64222aeb in LZ4_compress_default build_docker/./contrib/lz4/lib/lz4.c:1387:12
#12 0x48becc17 in DB::ColumnCompressed::compressBuffer(void const*, unsigned long, bool) build_docker/./src/Columns/ColumnCompressed.cpp:27:27
#13 0x48f1c63a in DB::ColumnVector<unsigned long>::compress() const build_docker/./src/Columns/ColumnVector.cpp:926:23
#14 0x487121ac in DB::ColumnArray::compress() const build_docker/./src/Columns/ColumnArray.cpp:939:39
#15 0x4853228e in DB::QueryCache::Writer::finalizeWrite() build_docker/./src/Interpreters/Cache/QueryCache.cpp:285:63
#16 0x4cf73f1b in DB::StreamInQueryCacheTransform::finalizeWriteInQueryCache() build_docker/./src/Processors/Transforms/StreamInQueryCacheTransform.cpp:26:22
#17 0x430cd175 in DB::QueryPipeline::finalizeWriteInQueryCache() build_docker/./src/QueryPipeline/QueryPipeline.cpp:597:59
#18 0x483e8c98 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3::operator()(DB::QueryPipeline&) build_docker/./src/Interpreters/executeQuery.cpp:939:36
#19 0x483e8c98 in decltype(std::declval<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&>()(std::declval<DB::QueryPipeline&>())) std::__1::__invoke[abi:v15000]<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&>(DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#20 0x483e8c98 in void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&>(DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3&, DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
#21 0x483e8c98 in std::__1::__function::__default_alloc_func<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3, void (DB::QueryPipeline&)>::operator()[abi:v15000](DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12
#22 0x483e8c98 in void std::__1::__function::__policy_invoker<void (DB::QueryPipeline&)>::__call_impl<std::__1::__function::__default_alloc_func<DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_3, void (DB::QueryPipeline&)>>(std::__1::__function::__policy_storage const*, DB::QueryPipeline&) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16
#23 0x43073df3 in std::__1::__function::__policy_func<void (DB::QueryPipeline&)>::operator()[abi:v15000](DB::QueryPipeline&) const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
#24 0x43073df3 in std::__1::function<void (DB::QueryPipeline&)>::operator()(DB::QueryPipeline&) const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
#25 0x43073df3 in DB::BlockIO::onFinish() build_docker/./src/QueryPipeline/BlockIO.cpp:57:9
#26 0x4c26063e in DB::TCPHandler::runImpl() build_docker/./src/Server/TCPHandler.cpp
#27 0x4c2a62d9 in DB::TCPHandler::run() build_docker/./src/Server/TCPHandler.cpp:2038:9
#28 0x58a5fddd in Poco::Net::TCPServerConnection::start() build_docker/./base/poco/Net/src/TCPServerConnection.cpp:43:3
#29 0x58a6116e in Poco::Net::TCPServerDispatcher::run() build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:115:20
#30 0x591c206b in Poco::PooledThread::run() build_docker/./base/poco/Foundation/src/ThreadPool.cpp:188:14
Memory was marked as uninitialized
#0 0xc98874d in __sanitizer_dtor_callback (/usr/bin/clickhouse+0xc98874d) (BuildId: f8a908ed51f5d37f9c371bbf43ef3de50516c9b3)
#1 0x29bfc86d in std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>::~basic_string() build_docker/./contrib/llvm-project/libcxx/include/string:2335:1
#2 0x29bfc86d in DB::QueryLogElement::~QueryLogElement() build_docker/./src/Interpreters/QueryLog.h:30:8
#3 0x483e4000 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*)::$_4::~$_4() build_docker/./src/Interpreters/executeQuery.cpp:1060:39
#4 0x483c2b77 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) build_docker/./src/Interpreters/executeQuery.cpp:1139:9
#5 0x483a94bd in DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) build_docker/./src/Interpreters/executeQuery.cpp:1166:30
#6 0x4c260249 in DB::TCPHandler::runImpl() build_docker/./src/Server/TCPHandler.cpp:420:24
#7 0x4c2a62d9 in DB::TCPHandler::run() build_docker/./src/Server/TCPHandler.cpp:2038:9
#8 0x58a5fddd in Poco::Net::TCPServerConnection::start() build_docker/./base/poco/Net/src/TCPServerConnection.cpp:43:3
#9 0x58a6116e in Poco::Net::TCPServerDispatcher::run() build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:115:20
#10 0x591c206b in Poco::PooledThread::run() build_docker/./base/poco/Foundation/src/ThreadPool.cpp:188:14
#11 0x591bd601 in Poco::(anonymous namespace)::RunnableHolder::run() build_docker/./base/poco/Foundation/src/Thread.cpp:45:11
#12 0x591b9328 in Poco::ThreadImpl::runnableEntry(void*) build_docker/./base/poco/Foundation/src/Thread_POSIX.cpp:335:27
#13 0x7f5d65f67608 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x8608) (BuildId: 7b4536f41cdaa5888408e82d0836e33dcf436466)
SUMMARY: MemorySanitizer: use-of-uninitialized-value build_docker/./src/IO/WriteBufferFromFileDescriptorDiscardOnFailure.cpp:16:23 in DB::WriteBufferFromFileDescriptorDiscardOnFailure::nextImpl()
``` | https://github.com/ClickHouse/ClickHouse/issues/48385 | https://github.com/ClickHouse/ClickHouse/pull/49316 | 6d3559e8170f4d5fe34d481c8ed26ca2a8871491 | b5a57da4ce5223a994d585dc9a09a43440726b0f | "2023-04-04T13:08:04Z" | c++ | "2023-05-04T13:17:30Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,380 | ["tests/queries/0_stateless/02432_s3_parallel_parts_cleanup.sql"] | Flaky test `02432_s3_parallel_parts_cleanup` | Link: https://s3.amazonaws.com/clickhouse-test-reports/0/77e0806ca43d59c4c5f50d3c001e96f5fd9db2fd/stateless_tests__release__databaseordinary_.html | https://github.com/ClickHouse/ClickHouse/issues/48380 | https://github.com/ClickHouse/ClickHouse/pull/48865 | 96d2482bff2100ffd0f0fb946cd509458feb16f4 | fe81b1f9c1b2eb5ef027723121fceb49e363e21b | "2023-04-04T12:10:06Z" | c++ | "2023-05-02T09:37:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,367 | ["src/Interpreters/InterpreterSelectQuery.cpp", "src/Processors/QueryPlan/ReadFromMergeTree.cpp", "src/Storages/MergeTree/MergeTreeData.cpp", "tests/queries/0_stateless/02708_parallel_replicas_not_found_column.reference", "tests/queries/0_stateless/02708_parallel_replicas_not_found_column.sql", "tests/queries/0_stateless/02709_parallel_replicas_with_final_modifier.reference", "tests/queries/0_stateless/02709_parallel_replicas_with_final_modifier.sql"] | Attempting to upgrade to 23.2 or 23.3.. version results in Not found column count() in block on aggregations | I was attempting to upgrade ClickHouse from version 23.1 to either 23.2 or 23.3, but all aggregation queries fail. `SELECT *` works. After rolling back, it works fine again. Below is the result of running `SELECT count()` on a table:
```:Exception: Not found column count() in block. There are only columns: . (NOT_FOUND_COLUMN_IN_BLOCK)```
| https://github.com/ClickHouse/ClickHouse/issues/48367 | https://github.com/ClickHouse/ClickHouse/pull/48433 | 14d6373bc75b548e13d8f48dead3c8048ed19776 | 3ad0a6ac1861292ef3e74d2738a7c67f44b0932d | "2023-04-04T02:50:35Z" | c++ | "2023-04-06T00:00:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,318 | ["src/Analyzer/Passes/QueryAnalysisPass.cpp", "tests/queries/0_stateless/02174_cte_scalar_cache.sql", "tests/queries/0_stateless/02483_elapsed_time.sh", "tests/queries/0_stateless/02841_with_clause_resolve.reference", "tests/queries/0_stateless/02841_with_clause_resolve.sql"] | [Analyzer] Bad cast in a simple query | **Describe what's wrong**
```
milovidov@milovidov-desktop:~$ clickhouse-local
ClickHouse local version 23.3.1.2537.
milovidov-desktop :) SET allow_experimental_analyzer = 1
SET allow_experimental_analyzer = 1
Query id: 857ffa3c-786e-4f98-832a-bc100a08dcce
Ok.
0 rows in set. Elapsed: 0.000 sec.
milovidov-desktop :) WITH
-- Input
44100 AS sample_frequency
, number AS tick
, tick / sample_frequency AS time
-- Output control
, 1 AS master_volume
, level -> least(1.0, greatest(-1.0, level)) AS clamp
, level -> (clamp(level) * 0x7FFF * master_volume)::Int16 AS output
, x -> (x, x) AS mono
-- Basic waves
, time -> sin(time * 2 * pi()) AS sine_wave
, time -> time::UInt64 % 2 * 2 - 1 AS square_wave
, time -> (time - floor(time)) * 2 - 1 AS sawtooth_wave
, time -> abs(sawtooth_wave(time)) * 2 - 1 AS triangle_wave
-- Helpers
, (from, to, wave, time) -> from + ((wave(time) + 1) / 2) * (to - from) AS lfo
, (from, to, steps, time) -> from + floor((time - floor(time)) * steps) / steps * (to - from) AS step_lfo
, (from, to, steps, time) -> exp(step_lfo(log(from), log(to), steps, time)) AS exp_step_lfo
-- Noise
, time -> cityHash64(time) / 0xFFFFFFFFFFFFFFFF AS uniform_noise
, time -> erf(uniform_noise(time)) AS white_noise
, time -> cityHash64(time) % 2 ? 1 : -1 AS bernoulli_noise
-- Distortion
, (x, amount) -> clamp(x * amount) AS clipping
, (x, amount) -> clamp(x > 0 ? pow(x, amount) : -pow(-x, amount)) AS power_distortion
, (x, amount) -> round(x * exp2(amount)) / exp2(amount) AS bitcrush
, (time, sample_frequency) -> round(time * sample_frequency) / sample_frequency AS desample
, (time, wave, amount) -> (time - floor(time) < (1 - amount)) ? wave(time * (1 - amount)) : 0 AS thin
, (time, wave, amount) -> wave(floor(time) + pow(time - floor(time), amount)) AS skew
-- Combining
, (a, b, weight) -> a * (1 - weight) + b * weight AS combine
-- Envelopes
, (time, offset, attack, hold, release) ->
time < offset ? 0
: (time < offset + attack ? ((time - offset) / attack)
: (time < offset + attack + hold ? 1
: (time < offset + attack + hold + release ? (offset + attack + hold + release - time) / release
: 0))) AS envelope
, (bpm, time, offset, attack, hold, release) ->
envelope(
time * (bpm / 60) - floor(time * (bpm / 60)),
offset,
attack,
WITH
-- Input
44100 AS sample_frequency
, number AS tick
, tick / sample_frequency AS time
-- Output control
, 1 AS master_volume
, level -> least(1.0, greatest(-1.0, level)) AS clamp
, level -> (clamp(level) * 0x7FFF * master_volume)::Int16 AS output
, x -> (x, x) AS mono
-- Basic waves
, time -> sin(time * 2 * pi()) AS sine_wave
, time -> time::UInt64 % 2 * 2 - 1 AS square_wave
, time -> (time - floor(time)) * 2 - 1 AS sawtooth_wave
, time -> abs(sawtooth_wave(time)) * 2 - 1 AS triangle_wave
-- Helpers
, (from, to, wave, time) -> from + ((wave(time) + 1) / 2) * (to - from) AS lfo
, (from, to, steps, time) -> from + floor((time - floor(time)) * steps) / steps * (to - from) AS step_lfo
, (from, to, steps, time) -> exp(step_lfo(log(from), log(to), steps, time)) AS exp_step_lfo
-- Noise
, time -> cityHash64(time) / 0xFFFFFFFFFFFFFFFF AS uniform_noise
, time -> erf(uniform_noise(time)) AS white_noise
, time -> cityHash64(time) % 2 ? 1 : -1 AS bernoulli_noise
-- Distortion
, (x, amount) -> clamp(x * amount) AS clipping
, (x, amount) -> clamp(x > 0 ? pow(x, amount) : -pow(-x, amount)) AS power_distortion
, (x, amount) -> round(x * exp2(amount)) / exp2(amount) AS bitcrush
, (time, sample_frequency) -> round(time * sample_frequency) / sample_frequency AS desample
, (time, wave, amount) -> (time - floor(time) < (1 - amount)) ? wave(time * (1 - amount)) : 0 AS thin
, (time, wave, amount) -> wave(floor(time) + pow(time - floor(time), amount)) AS skew
-- Combining
, (a, b, weight) -> a * (1 - weight) + b * weight AS combine
-- Envelopes
, (time, offset, attack, hold, release) ->
time < offset ? 0
: (time < offset + attack ? ((time - offset) / attack)
: (time < offset + attack + hold ? 1
: (time < offset + attack + hold + release ? (offset + attack + hold + release - time) / release
: 0))) AS envelope
, (bpm, time, offset, attack, hold, release) ->
envelope(
time * (bpm / 60) - floor(time * (bpm / 60)),
offset,
attack,
hold,
release) AS running_envelope
-- Sequencers
, (sequence, time) -> sequence[1 + time::UInt64 % length(sequence)] AS sequencer
-- Delay
, (time, wave, delay, decay, count) -> arraySum(n -> wave(time - delay * n) * pow(decay, n), range(count)) AS delay
, delay(time, (time -> power_distortion(sine_wave(time * 80 + sine_wave(time * 2)), lfo(0.5, 1, sine_wave, time / 16))
* running_envelope(60, time, 0, 0.0, 0.01, 0.1)),
0.2, 0.5, 5) AS kick
SELECT
(output(
kick +
delay(time, (time ->
power_distortion(
sine_wave(time * 50 + 1 * sine_wave(time * 100 + 1/4))
* running_envelope(60, time, 0, 0.01, 0.01, 0.1),
lfo(1, 0.75, triangle_wave, time / 8))),
0.2, 0.5, 10)
* lfo(0.5, 1, triangle_wave, time / 7)
+ delay(time, (time ->
power_distortion(
sine_wave(time * sequencer([50, 100, 200, 400], time / 2) + 1 * sine_wave(time * sequencer([50, 100, 200], time / 4) + 1/4))
* running_envelope(60, time, 0.5, 0.01, 0.01, 0.1),
lfo(1, 0.75, triangle_wave, time / 8))),
0.2, 0.5, 10)
* lfo(0.5, 1, triangle_wave, 16 + time / 11)
+ delay(time, (time ->
white_noise(time) * running_envelope(60, time, 0.75, 0.01, 0.01, 0.1)),
0.2, 0.5, 10)
* lfo(0.5, 1, triangle_wave, 24 + time / 13)
+ sine_wave(time * 100 + 1 * sine_wave(time * 10 + 1/4))
* running_envelope(120, time, 0, 0.01, 0.01, 0.1)
),
output(
kick +
delay(time + 0.01, (time ->
power_distortion(
sine_wave(time * 50 + 1 * sine_wave(time * 100 + 1/4))
* running_envelope(60, time, 0, 0.01, 0.01, 0.1),
lfo(1, 0.75, triangle_wave, time / 8))),
0.2, 0.5, 10)
* lfo(0.5, 1, triangle_wave, time / 7)
+ delay(time - 0.01, (time ->
power_distortion(
sine_wave(time * sequencer([50, 100, 200, 400], time / 2) + 1 * sine_wave(time * sequencer([50, 100, 200], time / 4) + 1/4))
* running_envelope(60, time, 0.5, 0.01, 0.01, 0.1),
lfo(1, 0.75, triangle_wave, time / 8))),
0.2, 0.5, 10)
* lfo(0.5, 1, triangle_wave, 16 + time / 11)
+ delay(time + 0.005, (time ->
white_noise(time) * running_envelope(60, time, 0.75, 0.01, 0.01, 0.1)),
0.2, 0.5, 10)
* lfo(0.5, 1, triangle_wave, 24 + time / 13)
))
FROM system.numbers;
WITH
44100 AS sample_frequency,
number AS tick,
tick / sample_frequency AS time,
1 AS master_volume,
level -> least(1., greatest(-1., level)) AS clamp,
level -> CAST((clamp(level) * 32767) * master_volume, 'Int16') AS output,
x -> (x, x) AS mono,
time -> sin((time * 2) * pi()) AS sine_wave,
time -> (((CAST(time, 'UInt64') % 2) * 2) - 1) AS square_wave,
time -> (((time - floor(time)) * 2) - 1) AS sawtooth_wave,
time -> ((abs(sawtooth_wave(time)) * 2) - 1) AS triangle_wave,
(from, to, wave, time) -> (from + (((wave(time) + 1) / 2) * (to - from))) AS lfo,
(from, to, steps, time) -> (from + ((floor((time - floor(time)) * steps) / steps) * (to - from))) AS step_lfo,
(from, to, steps, time) -> exp(step_lfo(log(from), log(to), steps, time)) AS exp_step_lfo,
time -> (cityHash64(time) / 18446744073709551615) AS uniform_noise,
time -> erf(uniform_noise(time)) AS white_noise,
time -> if(cityHash64(time) % 2, 1, -1) AS bernoulli_noise,
(x, amount) -> clamp(x * amount) AS clipping,
(x, amount) -> clamp(if(x > 0, pow(x, amount), -pow(-x, amount))) AS power_distortion,
(x, amount) -> (round(x * exp2(amount)) / exp2(amount)) AS bitcrush,
(time, sample_frequency) -> (round(time * sample_frequency) / sample_frequency) AS desample,
(time, wave, amount) -> if((time - floor(time)) < (1 - amount), wave(time * (1 - amount)), 0) AS thin,
(time, wave, amount) -> wave(floor(time) + pow(time - floor(time), amount)) AS skew,
(a, b, weight) -> ((a * (1 - weight)) + (b * weight)) AS combine,
(time, offset, attack, hold, release) -> if(time < offset, 0, if(time < (offset + attack), (time - offset) / attack, if(time < ((offset + attack) + hold), 1, if(time < (((offset + attack) + hold) + release), ((((offset + attack) + hold) + release) - time) / release, 0)))) AS envelope,
(bpm, time, offset, attack, hold, release) -> envelope((time * (bpm / 60)) - floor(time * (bpm / 60)), offset, attack, hold, release) AS running_envelope,
(sequence, time) -> (sequence[1 + (CAST(time, 'UInt64') % length(sequence))]) AS sequencer,
(time, wave, delay, decay, count) -> arraySum(n -> (wave(time - (delay * n)) * pow(decay, n)), range(count)) AS delay,
delay(time, time -> (power_distortion(sine_wave((time * 80) + sine_wave(time * 2)), lfo(0.5, 1, sine_wave, time / 16)) * running_envelope(60, time, 0, 0., 0.01, 0.1)), 0.2, 0.5, 5) AS kick
SELECT (output((((kick + (delay(time, time -> power_distortion(sine_wave((time * 50) + (1 * sine_wave((time * 100) + (1 / 4)))) * running_envelope(60, time, 0, 0.01, 0.01, 0.1), lfo(1, 0.75, triangle_wave, time / 8)), 0.2, 0.5, 10) * lfo(0.5, 1, triangle_wave, time / 7))) + (delay(time, time -> power_distortion(sine_wave((time * sequencer([50, 100, 200, 400], time / 2)) + (1 * sine_wave((time * sequencer([50, 100, 200], time / 4)) + (1 / 4)))) * running_envelope(60, time, 0.5, 0.01, 0.01, 0.1), lfo(1, 0.75, triangle_wave, time / 8)), 0.2, 0.5, 10) * lfo(0.5, 1, triangle_wave, 16 + (time / 11)))) + (delay(time, time -> (white_noise(time) * running_envelope(60, time, 0.75, 0.01, 0.01, 0.1)), 0.2, 0.5, 10) * lfo(0.5, 1, triangle_wave, 24 + (time / 13)))) + (sine_wave((time * 100) + (1 * sine_wave((time * 10) + (1 / 4)))) * running_envelope(120, time, 0, 0.01, 0.01, 0.1))), output(((kick + (delay(time + 0.01, time -> power_distortion(sine_wave((time * 50) + (1 * sine_wave((time * 100) + (1 / 4)))) * running_envelope(60, time, 0, 0.01, 0.01, 0.1), lfo(1, 0.75, triangle_wave, time / 8)), 0.2, 0.5, 10) * lfo(0.5, 1, triangle_wave, time / 7))) + (delay(time - 0.01, time -> power_distortion(sine_wave((time * sequencer([50, 100, 200, 400], time / 2)) + (1 * sine_wave((time * sequencer([50, 100, 200], time / 4)) + (1 / 4)))) * running_envelope(60, time, 0.5, 0.01, 0.01, 0.1), lfo(1, 0.75, triangle_wave, time / 8)), 0.2, 0.5, 10) * lfo(0.5, 1, triangle_wave, 16 + (time / 11)))) + (delay(time + 0.005, time -> (white_noise(time) * running_envelope(60, time, 0.75, 0.01, 0.01, 0.1)), 0.2, 0.5, 10) * lfo(0.5, 1, triangle_wave, 24 + (time / 13)))))
FROM system.numbers
Query id: 84d88935-ec0e-4522-b1dc-906c26b0a56b
0 rows in set. Elapsed: 0.203 sec.
Received exception:
Code: 49. DB::Exception: Bad cast from type DB::ColumnNode to DB::IdentifierNode. (LOGICAL_ERROR)
```
See https://github.com/ClickHouse/NoiSQL | https://github.com/ClickHouse/ClickHouse/issues/48318 | https://github.com/ClickHouse/ClickHouse/pull/52947 | 109b0197d6fb9bbbe8384b5ab3bb0b3a0740e998 | 34b88721188618bf51d99cc5028873ba2018133c | "2023-04-01T21:43:26Z" | c++ | "2023-08-04T10:29:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,272 | ["tests/queries/0_stateless/02789_jit_cannot_convert_column.reference", "tests/queries/0_stateless/02789_jit_cannot_convert_column.sql", "tests/queries/0_stateless/02790_jit_wrong_result.reference", "tests/queries/0_stateless/02790_jit_wrong_result.sql"] | 01680_date_time_add_ubsan is flaky (due to a bug in JIT) | Seems everything was fine until yesterday:
```
SELECT
toStartOfDay(check_start_time) as t,
count() as runs,
100 * countIf(test_status != 'OK') / runs as failure_percentage
FROM checks
WHERE
test_name = '01680_date_time_add_ubsan'
AND pull_request_number = 0
AND check_start_time > today() - interval 30 day
GROUP BY t
ORDER by t
```
β | t | runs | failure_percentage
-- | -- | -- | --
1 | 2023-03-01 00:00:00 | 190 | 0
2 | 2023-03-02 00:00:00 | 198 | 0
3 | 2023-03-03 00:00:00 | 219 | 0
4 | 2023-03-04 00:00:00 | 61 | 0
5 | 2023-03-05 00:00:00 | 50 | 0
6 | 2023-03-06 00:00:00 | 82 | 0
7 | 2023-03-07 00:00:00 | 158 | 0
8 | 2023-03-08 00:00:00 | 265 | 0
9 | 2023-03-09 00:00:00 | 176 | 0
10 | 2023-03-10 00:00:00 | 150 | 0
11 | 2023-03-11 00:00:00 | 107 | 0
12 | 2023-03-12 00:00:00 | 66 | 0
13 | 2023-03-13 00:00:00 | 83 | 0
14 | 2023-03-14 00:00:00 | 166 | 0
15 | 2023-03-15 00:00:00 | 210 | 0
16 | 2023-03-16 00:00:00 | 133 | 0
17 | 2023-03-17 00:00:00 | 166 | 0
18 | 2023-03-18 00:00:00 | 190 | 0
19 | 2023-03-19 00:00:00 | 50 | 0
20 | 2023-03-20 00:00:00 | 202 | 0
21 | 2023-03-21 00:00:00 | 228 | 0
22 | 2023-03-22 00:00:00 | 237 | 0
23 | 2023-03-23 00:00:00 | 198 | 0
24 | 2023-03-24 00:00:00 | 211 | 0
25 | 2023-03-25 00:00:00 | 88 | 0
26 | 2023-03-26 00:00:00 | 79 | 0
27 | 2023-03-27 00:00:00 | 209 | 0
28 | 2023-03-28 00:00:00 | 245 | 0
29 | 2023-03-29 00:00:00 | 260 | 0
30 | 2023-03-30 00:00:00 | 339 | 0.2949852507374631
31 | 2023-03-31 00:00:00 | 120 | 19.166666666666668
Example: https://s3.amazonaws.com/clickhouse-test-reports/0/8994305fb3364098b2ce43fc5d92fb1196e6df43/stateless_tests__release__databaseordinary_.html
```
2023-03-31 05:47:04 The query succeeded but the server error '407' was expected (query: SELECT DISTINCT result FROM (SELECT toStartOfFifteenMinutes(toDateTime(toStartOfFifteenMinutes(toDateTime(1000.0001220703125) + (number * 65536))) + (number * 9223372036854775807)) AS result FROM system.numbers LIMIT 1048576) ORDER BY result DESC NULLS FIRST FORMAT Null; -- { serverError 407 }).
2023-03-31 05:47:04
2023-03-31 05:47:04 stdout:
2023-03-31 05:47:04 \N
2023-03-31 05:47:04
2023-03-31 05:47:04 Settings used in the test: --max_insert_threads=11 --group_by_two_level_threshold=29673 --group_by_two_level_threshold_bytes=43004415 --distributed_aggregation_memory_efficient=0 --fsync_metadata=0 --output_format_parallel_formatting=1 --input_format_parallel_parsing=1 --min_chunk_bytes_for_parallel_parsing=7451526 --max_read_buffer_size=943333 --prefer_localhost_replica=0 --max_block_size=72709 --max_threads=45 --optimize_or_like_chain=0 --optimize_read_in_order=0 --read_in_order_two_level_merge_threshold=69 --optimize_aggregation_in_order=1 --aggregation_in_order_max_block_bytes=35738655 --min_compress_block_size=1648817 --max_compress_block_size=290228 --use_uncompressed_cache=0 --min_bytes_to_use_direct_io=10312023578 --min_bytes_to_use_mmap_io=139391117 --local_filesystem_read_method=pread --remote_filesystem_read_method=threadpool --local_filesystem_read_prefetch=0 --remote_filesystem_read_prefetch=0 --compile_expressions=1 --compile_aggregate_expressions=0 --compile_sort_description=1 --merge_tree_coarse_index_granularity=3 --optimize_distinct_in_order=0 --optimize_sorting_by_input_stream_properties=1 --http_response_buffer_size=4866157 --http_wait_end_of_query=False --enable_memory_bound_merging_of_aggregation_results=1 --min_count_to_compile_expression=0 --min_count_to_compile_aggregate_expression=0 --min_count_to_compile_sort_description=0
2023-03-31 05:47:04
2023-03-31 05:47:04 MergeTree settings used in test: --ratio_of_defaults_for_sparse_serialization=1.0 --prefer_fetch_merged_part_size_threshold=2987423327 --vertical_merge_algorithm_min_rows_to_activate=1 --vertical_merge_algorithm_min_columns_to_activate=63 --min_merge_bytes_to_use_direct_io=10737418240 --index_granularity_bytes=25281005 --merge_max_block_size=14399 --index_granularity=63978 --min_bytes_for_wide_part=1073741824
2023-03-31 05:47:04
2023-03-31 05:47:04 Database: test_zqfna9mk
```
| https://github.com/ClickHouse/ClickHouse/issues/48272 | https://github.com/ClickHouse/ClickHouse/pull/51113 | db82e94e68c48dd01a2e91be597cbedc7b56a188 | e28dc5d61c924992ddd0066e0e5e5bb05b848db3 | "2023-03-31T07:41:19Z" | c++ | "2023-12-08T02:27:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,260 | ["docs/en/sql-reference/functions/date-time-functions.md", "src/Functions/dateDiff.cpp", "src/Functions/trim.cpp", "tests/queries/0_stateless/02710_date_diff_aliases.reference", "tests/queries/0_stateless/02710_date_diff_aliases.sql", "tests/queries/0_stateless/02711_trim_aliases.reference", "tests/queries/0_stateless/02711_trim_aliases.sql"] | Missing functions in system.functions | I was expecting to see every built-in function to be presented in `system.functions`.
However there are some functions like `date_add`, `timestamp_sub`, `timestamp_add` missing.
https://fiddle.clickhouse.com/a028f696-69fa-4192-842e-d5c9ba751a34
```sql
SELECT name
FROM system.functions
WHERE name ilike 'timestamp%';
-- empty resultset
```
| https://github.com/ClickHouse/ClickHouse/issues/48260 | https://github.com/ClickHouse/ClickHouse/pull/48489 | 17aecb797cb863ea02251b9f0daa4a4a12551de2 | dcfd843b2d029e00b8348e2fe20ec4b50ba451cc | "2023-03-30T20:59:46Z" | c++ | "2023-04-11T17:53:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,228 | ["src/AggregateFunctions/AggregateFunctionKolmogorovSmirnovTest.cpp", "src/AggregateFunctions/AggregateFunctionKolmogorovSmirnovTest.h", "src/AggregateFunctions/registerAggregateFunctions.cpp", "tests/queries/0_stateless/02706_kolmogorov_smirnov_test.reference", "tests/queries/0_stateless/02706_kolmogorov_smirnov_test.sql", "tests/queries/0_stateless/02706_kolmogorov_smirnov_test_scipy.python", "tests/queries/0_stateless/02706_kolmogorov_smirnov_test_scipy.reference", "tests/queries/0_stateless/02706_kolmogorov_smirnov_test_scipy.sh"] | Add statistical aggregate function `kolmogorovSmirnovTest` | **Use case**
https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
This type of statistical test is useful for checking the equality of two distributions. In addition, it can easily be modified to check whether a sample comes from a normal distribution by performing the test against a normally distributed sample. (The normal distribution is just an example; you can perform the test against a sample from any distribution.)
Previously there was an attempt https://github.com/ClickHouse/ClickHouse/pull/37873 to add a Shapiro-Wilk test for normality, but it has one very big disadvantage: it works only for relatively small samples (< 2500), which is not applicable to the ClickHouse use case.
**Describe the solution you'd like**
Take a look at how `welchTTest` or `studentTTest` are implemented.
**Describe alternatives you've considered**
As an alternative, the `andersonDarling` test could also be implemented.
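For reference, a minimal Python sketch of the statistic behind such a test (the two-sample Kolmogorov-Smirnov distance, i.e. the maximum absolute difference between the two empirical CDFs). This is only an illustration of the math, not the ClickHouse implementation:

```python
import bisect

def ks_statistic(sample1, sample2):
    """Two-sample KS statistic: sup over x of |F1(x) - F2(x)|,
    where F1 and F2 are the empirical CDFs of the two samples."""
    s1, s2 = sorted(sample1), sorted(sample2)
    n1, n2 = len(s1), len(s2)
    d = 0.0
    # The supremum is attained at one of the observed points,
    # so it is enough to scan the pooled sample.
    for x in s1 + s2:
        cdf1 = bisect.bisect_right(s1, x) / n1
        cdf2 = bisect.bisect_right(s2, x) / n2
        d = max(d, abs(cdf1 - cdf2))
    return d
```

A normality check then reduces to computing this distance between the data and a normally distributed reference sample, as described above.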
| https://github.com/ClickHouse/ClickHouse/issues/48228 | https://github.com/ClickHouse/ClickHouse/pull/48325 | db864891f88c17586ee5eb82ade693309a0371d0 | 6e8f77ee9cba381c5175b7b22ce0c6f6bb95d6b9 | "2023-03-30T14:39:03Z" | c++ | "2023-04-04T23:18:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,201 | ["docs/en/sql-reference/statements/system.md"] | Reload config / reload users | Is the documentation correct here?
https://clickhouse.com/docs/en/sql-reference/statements/system#reload-config

| https://github.com/ClickHouse/ClickHouse/issues/48201 | https://github.com/ClickHouse/ClickHouse/pull/48383 | a416e46f35e8f6fcfc3fb488b2ce3368f699e7b0 | 634ab620a7238ff3302cd9308c63dd63ea3e3aba | "2023-03-30T09:22:06Z" | c++ | "2023-04-04T13:38:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,173 | ["src/Common/MemoryTracker.cpp", "src/Interpreters/Aggregator.cpp", "src/Interpreters/Aggregator.h", "src/Processors/Transforms/AggregatingInOrderTransform.cpp", "tests/queries/0_stateless/02797_aggregator_huge_mem_usage_bug.reference", "tests/queries/0_stateless/02797_aggregator_huge_mem_usage_bug.sql"] | 02151_hash_table_sizes_stats triggers OOM | Examples:
https://s3.amazonaws.com/clickhouse-test-reports/0/5fa519d04311615433ad7e016d974de6f7af9860/stateless_tests__tsan__[3/3].html
https://s3.amazonaws.com/clickhouse-test-reports/0/278b8f74c2a54e832cb25c9cd629f6b1939dcfb6/stateless_tests__tsan__[3/3].html
See also https://github.com/ClickHouse/ClickHouse/pull/45181#issuecomment-1380411041
It seems `02151_hash_table_sizes_stats_distributed` triggers OOM as well. But it was disabled in https://github.com/ClickHouse/ClickHouse/pull/45287, and then we forgot about it. We should try to re-enable it.
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 48,056 | ["src/Parsers/ASTAlterQuery.cpp", "tests/queries/0_stateless/01318_alter_add_constraint_format.reference"] | clickhouse-format formatting issues (minor) | `clickhouse-format --oneline --query "ALTER TABLE alter_test COMMENT COLUMN IF EXISTS ToDrop 'new comment'"`
should output one line:
`ALTER TABLE alter_test COMMENT COLUMN IF EXISTS ToDrop 'new comment'`
but outputs two lines:
```
ALTER TABLE alter_test
COMMENT COLUMN IF EXISTS ToDrop 'new comment'
```
The issue was found while testing the #45649 refactoring.
I will be fixing the issue in a separate PR. | https://github.com/ClickHouse/ClickHouse/issues/48056 | https://github.com/ClickHouse/ClickHouse/pull/48289 | 3cb7d48b755e4b5707a68541bb473b51573dafca | b6adc258957fe66d7c9b28c8a791a8a65ad4b099 | "2023-03-27T14:02:23Z" | c++ | "2023-04-03T10:35:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,988 | ["tests/queries/0_stateless/02967_fuzz_bad_cast.reference", "tests/queries/0_stateless/02967_fuzz_bad_cast.sql"] | Fuzzer: AggregateFunctionSum cast to const DB::ColumnVector<DB::UInt8> failure |
```
2023.03.21 19:50:56.131471 [ 478 ] {} <Fatal> BaseDaemon: (version 23.3.1.1, build id: 407FF80ECCF6B9FAFD4D413B1FC42FF659C967F1) (from thread 459) (query_id: dafe564f-2858-4242-a7f7-505eb3174fcf) (query: SELECT sum(0), NULL FROM t0__fuzz_29 FULL OUTER JOIN t1__fuzz_4 USING (x) PREWHERE NULL) Received signal sanitizer trap (-3)
2023.03.21 19:50:56.131508 [ 478 ] {} <Fatal> BaseDaemon: Sanitizer trap.
2023.03.21 19:50:56.131562 [ 478 ] {} <Fatal> BaseDaemon: Stack trace: 0x22f95457 0x2328ebe8 0x185f0576 0x18604fcd 0x268b6d92 0x2e471d80 0x2e40bc41 0x30e07b9a 0x30e03206 0x30a215f4 0x30a10cfb 0x30a1235f 0x2307f97a 0x23083504 0x7f6f03e06609 0x7f6f03d2b133
2023.03.21 19:50:56.145542 [ 478 ] {} <Fatal> BaseDaemon: 0. ./build_docker/../src/Common/StackTrace.cpp:287: StackTrace::tryCapture() @ 0x22f95457 in /workspace/clickhouse
2023.03.21 19:50:56.170951 [ 478 ] {} <Fatal> BaseDaemon: 1. ./build_docker/../src/Common/StackTrace.h:0: sanitizerDeathCallback() @ 0x2328ebe8 in /workspace/clickhouse
2023.03.21 19:50:57.314965 [ 478 ] {} <Fatal> BaseDaemon: 2. __sanitizer::Die() @ 0x185f0576 in /workspace/clickhouse
2023.03.21 19:50:58.442737 [ 478 ] {} <Fatal> BaseDaemon: 3. ? @ 0x18604fcd in /workspace/clickhouse
2023.03.21 19:50:58.498839 [ 478 ] {} <Fatal> BaseDaemon: 4.1. inlined from ./build_docker/../src/Common/assert_cast.h:0: DB::ColumnVector<char8_t> const& assert_cast<DB::ColumnVector<char8_t> const&, DB::IColumn const&>(DB::IColumn const&)
2023.03.21 19:50:58.498889 [ 478 ] {} <Fatal> BaseDaemon: 4. ./build_docker/../src/AggregateFunctions/AggregateFunctionSum.h:477: DB::AggregateFunctionSum<char8_t, unsigned long, DB::AggregateFunctionSumData<unsigned long>, (DB::AggregateFunctionSumType)0>::addBatchSinglePlace(unsigned long, unsigned long, char*, DB::IColumn const**, DB::Arena*, long) const @ 0x268b6d92 in /workspace/clickhouse
2023.03.21 19:50:58.797245 [ 478 ] {} <Fatal> BaseDaemon: 5. ./build_docker/../src/Interpreters/Aggregator.cpp:0: void DB::Aggregator::executeWithoutKeyImpl<false>(char*&, unsigned long, unsigned long, DB::Aggregator::AggregateFunctionInstruction*, DB::Arena*) const @ 0x2e471d80 in /workspace/clickhouse
2023.03.21 19:50:59.091024 [ 478 ] {} <Fatal> BaseDaemon: 6. ./build_docker/../src/Interpreters/Aggregator.cpp:1574: DB::Aggregator::executeOnBlock(std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>, unsigned long, unsigned long, DB::AggregatedDataVariants&, std::vector<DB::IColumn const*, std::allocator<DB::IColumn const*>>&, std::vector<std::vector<DB::IColumn const*, std::allocator<DB::IColumn const*>>, std::allocator<std::vector<DB::IColumn const*, std::allocator<DB::IColumn const*>>>>&, bool&) const @ 0x2e40bc41 in /workspace/clickhouse
2023.03.21 19:50:59.141752 [ 478 ] {} <Fatal> BaseDaemon: 7. ./build_docker/../src/Processors/Transforms/AggregatingTransform.cpp:0: DB::AggregatingTransform::consume(DB::Chunk) @ 0x30e07b9a in /workspace/clickhouse
2023.03.21 19:50:59.190980 [ 478 ] {} <Fatal> BaseDaemon: 8.1. inlined from ./build_docker/../src/Processors/Chunk.h:32: ~Chunk
2023.03.21 19:50:59.191016 [ 478 ] {} <Fatal> BaseDaemon: 8. ./build_docker/../src/Processors/Transforms/AggregatingTransform.cpp:627: DB::AggregatingTransform::work() @ 0x30e03206 in /workspace/clickhouse
2023.03.21 19:50:59.199173 [ 478 ] {} <Fatal> BaseDaemon: 9.1. inlined from ./build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:50: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*)
2023.03.21 19:50:59.199202 [ 478 ] {} <Fatal> BaseDaemon: 9. ./build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:92: DB::ExecutionThreadContext::executeTask() @ 0x30a215f4 in /workspace/clickhouse
2023.03.21 19:50:59.222601 [ 478 ] {} <Fatal> BaseDaemon: 10. ./build_docker/../src/Processors/Executors/PipelineExecutor.cpp:229: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x30a10cfb in /workspace/clickhouse
2023.03.21 19:50:59.248284 [ 478 ] {} <Fatal> BaseDaemon: 11.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:833: operator()
2023.03.21 19:50:59.248324 [ 478 ] {} <Fatal> BaseDaemon: 11.2. inlined from ./build_docker/../base/base/scope_guard.h:99: BasicScopeGuard<DB::PipelineExecutor::spawnThreads()::$_0::operator()() const::'lambda'()>::invoke()
2023.03.21 19:50:59.248359 [ 478 ] {} <Fatal> BaseDaemon: 11.3. inlined from ./build_docker/../base/base/scope_guard.h:48: ~BasicScopeGuard
2023.03.21 19:50:59.248406 [ 478 ] {} <Fatal> BaseDaemon: 11.4. inlined from ./build_docker/../src/Processors/Executors/PipelineExecutor.cpp:328: operator()
2023.03.21 19:50:59.248456 [ 478 ] {} <Fatal> BaseDaemon: 11.5. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0&>()()) std::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&)
2023.03.21 19:50:59.248495 [ 478 ] {} <Fatal> BaseDaemon: 11.6. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/tuple:1789: decltype(auto) std::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::tuple<>&, std::__tuple_indices<>)
2023.03.21 19:50:59.248535 [ 478 ] {} <Fatal> BaseDaemon: 11.7. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/tuple:1798: decltype(auto) std::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::tuple<>&)
2023.03.21 19:50:59.248572 [ 478 ] {} <Fatal> BaseDaemon: 11.8. inlined from ./build_docker/../src/Common/ThreadPool.h:210: operator()
2023.03.21 19:50:59.248615 [ 478 ] {} <Fatal> BaseDaemon: 11.9. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0>()()) std::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(DB::PipelineExecutor::spawnThreads()::$_0&&)
2023.03.21 19:50:59.248658 [ 478 ] {} <Fatal> BaseDaemon: 11.10. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:479: void std::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&)
2023.03.21 19:50:59.248703 [ 478 ] {} <Fatal> BaseDaemon: 11.11. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:235: std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>::operator()[abi:v15000]()
2023.03.21 19:50:59.248741 [ 478 ] {} <Fatal> BaseDaemon: 11. ./build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:716: void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x30a1235f in /workspace/clickhouse
2023.03.21 19:50:59.263502 [ 478 ] {} <Fatal> BaseDaemon: 12.1. inlined from ./build_docker/../base/base/strong_typedef.h:23: StrongTypedef<std::integral_constant<bool, true> >
2023.03.21 19:50:59.263528 [ 478 ] {} <Fatal> BaseDaemon: 12.2. inlined from ./build_docker/../src/Common/OpenTelemetryTraceContext.h:63: DB::OpenTelemetry::Span::isTraceEnabled() const
2023.03.21 19:50:59.263566 [ 478 ] {} <Fatal> BaseDaemon: 12. ./build_docker/../src/Common/ThreadPool.cpp:317: ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0x2307f97a in /workspace/clickhouse
2023.03.21 19:50:59.281491 [ 478 ] {} <Fatal> BaseDaemon: 13. ./build_docker/../src/Common/ThreadPool.cpp:0: void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, long, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x23083504 in /workspace/clickhouse
2023.03.21 19:50:59.281518 [ 478 ] {} <Fatal> BaseDaemon: 14. ? @ 0x7f6f03e06609 in ?
2023.03.21 19:50:59.281560 [ 478 ] {} <Fatal> BaseDaemon: 15. clone @ 0x7f6f03d2b133 in ?
2023.03.21 19:50:59.281599 [ 478 ] {} <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
```
Fuzzer report: https://s3.amazonaws.com/clickhouse-test-reports/47727/49ffda181b3e57aff8d63f8636241b01159bd652/fuzzer_astfuzzerubsan/report.html | https://github.com/ClickHouse/ClickHouse/issues/47988 | https://github.com/ClickHouse/ClickHouse/pull/58893 | c7016339551a901244fdc5d6319799228d0a1df0 | 339fcdcf98c25cbe63c4690d8880a6a41a99984d | "2023-03-24T18:46:06Z" | c++ | "2024-01-17T15:21:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,952 | ["src/DataTypes/getLeastSupertype.cpp", "tests/queries/0_stateless/02713_ip4_uint_compare.reference", "tests/queries/0_stateless/02713_ip4_uint_compare.sql"] | using IPv4 field in WHERE clause gives "no supertype for types UInt32, IPv4" | (you don't have to strictly follow this form)
**Describe the issue**
In 19.5.3.8, we could do this:
WHERE ip = IPv4StringToNum('1.2.3.4')
for ip of type IPv4 or Nullable(IPv4)
In 22.12.5.34, it also works.
In 23.2.4.12, we instead get a `DB::Exception`.
**How to reproduce**
* Which ClickHouse server versions are incompatible
The regression seems to have appeared by 23.2.4.12 at the latest; 22.12.5.34 is still OK.
* `CREATE TABLE` statements for all tables involved
```
CREATE TABLE test (
`dateseen` DateTime,
`ip` IPv4
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(`dateseen`)
ORDER BY (`ip`)
```
* Sample data for all these tables, use [clickhouse-obfuscator](https://github.com/ClickHouse/ClickHouse/blob/master/programs/obfuscator/Obfuscator.cpp#L42-L80) if necessary
No data needed. It appears in the query parsing/planning stage.
* Queries to run that lead to unexpected result
```
SELECT *
FROM test
WHERE ip = IPv4StringToNum('1.2.3.4')
```
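Until this is fixed, two possible workarounds (untested here against the affected version; both rely on comparing the column against a value that is already of type `IPv4` rather than a bare `UInt32`):

```sql
-- Produce an IPv4 literal directly instead of a UInt32:
SELECT *
FROM test
WHERE ip = toIPv4('1.2.3.4');

-- Or keep IPv4StringToNum but cast its UInt32 result back to IPv4:
SELECT *
FROM test
WHERE ip = CAST(IPv4StringToNum('1.2.3.4'), 'IPv4');
```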
**Error message and/or stacktrace**
client side:
```
Received exception from server (version 23.2.4):
Code: 386. DB::Exception: Received from XXXXX:19000. DB::Exception: There is no supertype for types UInt32, IPv4 because some of them are numbers and some of them are not: while executing 'FUNCTION equals(ip : 1, IPv4StringToNum('1.2.3.4') : 3) -> equals(ip, IPv4StringToNum('1.2.3.4')) UInt8 : 4'.
```
server side (clickhouse-server.err.log):
```
2023.03.23 11:18:32.528497 [ 48 ] {b5b50ffa-95b0-404d-bbab-3539290f4715} <Error> TCPHandler: Code: 386. DB::Exception: There is no supertype for types UInt32, IPv4 because some of them are numbers and some of them are not: while executing 'FUNCTION equals(ip : 1, IPv4StringToNum('1.2.3.4') : 3) -> equals(ip, IPv4StringToNum('1.2.3.4')) UInt8 : 4'. (NO_COMMON_TYPE), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe0c86b5 in /usr/bin/clickhouse
1. ? @ 0xce30bad in /usr/bin/clickhouse
2. ? @ 0x12949f07 in /usr/bin/clickhouse
3. ? @ 0x12948c0c in /usr/bin/clickhouse
4. std::__1::shared_ptr<DB::IDataType const> DB::getLeastSupertype<(DB::LeastSupertypeOnError)0>(std::__1::vector<std::__1::shared_ptr<DB::IDataType const>, std::__1::allocator<std::__1::shared_ptr<DB::IDataType const>>> const&) @ 0x12947b0c in /usr/bin/clickhouse
5. ? @ 0xa311937 in /usr/bin/clickhouse
6. ? @ 0xa2f8ea4 in /usr/bin/clickhouse
7. ? @ 0x893fc0e in /usr/bin/clickhouse
8. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x123b866b in /usr/bin/clickhouse
9. DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x123b90ec in /usr/bin/clickhouse
10. DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x123ba35b in /usr/bin/clickhouse
11. DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const @ 0x12cb9cfb in /usr/bin/clickhouse
12. DB::ExpressionActions::execute(DB::Block&, bool) const @ 0x12cbade6 in /usr/bin/clickhouse
13. DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::__1::shared_ptr<DB::FilterDAGInfo> const&, std::__1::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) @ 0x12ce1f34 in /usr/bin/clickhouse
14. DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0x135d6f71 in /usr/bin/clickhouse
15. ? @ 0x135cfcba in /usr/bin/clickhouse
16. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&,
DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>) @ 0x135c9fb9 in /usr/bin/clickhouse
17. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>) @ 0x135c73d4 in /usr/bin/clickhouse
18. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const> const&, DB::SelectQueryOptions const&, std::__1::shared_ptr<DB::PreparedSets>) @ 0x135cc2ac in /usr/bin/clickhouse
19. DB::MergeTreeData::getQueryProcessingStageWithAggregateProjection(std::__1::shared_ptr<DB::Context const>, std::__1::shared_ptr<DB::StorageSnapshot> const&, DB::SelectQueryInfo&) const @ 0x142aba2e in /usr/bin/clickhouse
20. DB::MergeTreeData::getQueryProcessingStage(std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, std::__1::shared_ptr<DB::StorageSnapshot> const&, DB::SelectQueryInfo&) const @ 0x142b0083 in /usr/bin/clickhouse
21. DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0x135d6ee9 in /usr/bin/clickhouse
22. ? @ 0x135cfcba in /usr/bin/clickhouse
23. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&,
DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>) @ 0x135c9fb9 in /usr/bin/clickhouse
24. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x1365bd02 in /usr/bin/clickhouse
25. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x13659caa in /usr/bin/clickhouse
26. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x13582f50 in /usr/bin/clickhouse
27. ? @ 0x1397f260 in /usr/bin/clickhouse
28. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x1397c9ad in /usr/bin/clickhouse
29. DB::TCPHandler::runImpl() @ 0x14703d39 in /usr/bin/clickhouse
30. DB::TCPHandler::run() @ 0x14719259 in /usr/bin/clickhouse
31. Poco::Net::TCPServerConnection::start() @ 0x1761d1b4 in /usr/bin/clickhouse
```
**Additional context**
Add any other context about the problem here.
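One observation that may be useful context (my reading of the error message, not a verified root-cause analysis): `IPv4StringToNum` returns a plain `UInt32`, while the column is of the distinct `IPv4` type, and `getLeastSupertype` in 23.x refuses to combine the two. This is visible with:

```sql
-- On the affected version this reports UInt32, matching the
-- "UInt32, IPv4" pair named in the exception above.
SELECT toTypeName(IPv4StringToNum('1.2.3.4'));
```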
| https://github.com/ClickHouse/ClickHouse/issues/47952 | https://github.com/ClickHouse/ClickHouse/pull/48611 | 2e612c364462b10efc4a8c3fc636a58bc30b54f8 | 6232d17876fa266493373924d3bd6a4cdc3a95ee | "2023-03-23T18:29:57Z" | c++ | "2023-04-11T13:53:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,947 | ["src/Storages/MergeTree/MergeTreeData.cpp", "tests/performance/set_disable_skip_index.xml", "tests/queries/0_stateless/02707_skip_index_with_in.reference", "tests/queries/0_stateless/02707_skip_index_with_in.sql"] | ClickHouse spends time in makeSetsForIndex for skip index with use_skip_indexes=0 | **Describe the situation**
When you have a skip index defined on a table and are using an `IN (...)` construction, ClickHouse will always spend time in `makeSetsForIndex` regardless of the value of `use_skip_indexes`.
For some of our queries, evaluating the skip index takes more time than just reading all the rows, so we set `use_skip_indexes=0` for these queries. However, we still see a significant performance regression on these queries from simply having a skip index defined on the table, because of the time spent in `makeSetsForIndex`.
Setting `use_skip_indexes=0` and `use_index_for_in_with_subqueries=0` resolves the performance regression.
**Expected performance**
I would expect us not to do any work related to skip indexes if `use_skip_indexes` is set to 0
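For illustration, the workaround described above spelled out as a query (table and column names are hypothetical):

```sql
SELECT count()
FROM events
WHERE user_id IN (1, 2, 3)
SETTINGS use_skip_indexes = 0,
         use_index_for_in_with_subqueries = 0;
```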
| https://github.com/ClickHouse/ClickHouse/issues/47947 | https://github.com/ClickHouse/ClickHouse/pull/48299 | 5495cb2935c3f4d9fc89e4fb78bda5c51fb07c8c | 3302f1ff909a798065e1a04151f260b62359fcea | "2023-03-23T16:32:04Z" | c++ | "2023-04-08T19:25:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,923 | ["docs/en/engines/table-engines/mergetree-family/mergetree.md"] | Missing/unclear docs about multi-expression secondary indexes | The docs on [data skipping indices](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-data_skipping-indexes) state that they are built over an expression
```sql
INDEX index_name expr TYPE type(...) [GRANULARITY granularity_value]
```
"Expression" is used in the docs in singular, i.e. the index is build over a *single* expression.
But then the example
```sql
CREATE TABLE table_name
(
u64 UInt64,
i32 Int32,
s String,
...
INDEX a (u64 * i32, s) TYPE minmax GRANULARITY 3,
INDEX b (u64 * length(s)) TYPE set(1000) GRANULARITY 4
) ENGINE = MergeTree()
...
```
builds index `a` over *two* expressions. What happens in the case of more than one expression is not intuitively clear. E.g. does the index (in this case a `minmax` index) store one minimum/maximum pair per expression? (If so, why would one not simply define two indices instead?) Or are both expressions somehow combined? (That would be odd, but other databases do that kind of thing, usually based on string concatenation.)
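For contrast, a minimal single-expression skipping index, written in the same style as the docs' example (the table itself is hypothetical, added here only to illustrate the suggested structure):

```sql
CREATE TABLE table_single
(
    u64 UInt64,
    s String,
    -- one index over one expression
    INDEX idx_len (length(s)) TYPE minmax GRANULARITY 3
) ENGINE = MergeTree()
ORDER BY u64;
```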
The best option would be for the docs to first describe and give an example of single-expression skipping indices, and then treat multi-expression indexes as a special "extension", along with a separate example. | https://github.com/ClickHouse/ClickHouse/issues/47923 | https://github.com/ClickHouse/ClickHouse/pull/47961 | 76401dcd83dbf7755bf71e1441cbae9992c0cad6 | 7ecfd664d0478096e3f7eefec6e0b8ec6c20716c | "2023-03-23T10:16:56Z" | c++ | "2023-03-23T22:43:15Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,891 | ["src/Interpreters/Aggregator.cpp", "tests/queries/0_stateless/02770_jit_aggregation_nullable_key_fix.reference", "tests/queries/0_stateless/02770_jit_aggregation_nullable_key_fix.sql"] | Logical error: Invalid number of rows in Chunk | https://s3.amazonaws.com/clickhouse-test-reports/0/4dc5a629c349dd88cd7728621223a62e9ed87a4a/fuzzer_astfuzzerdebug/report.html

```
dell9510 :) set group_by_use_nulls=1

SET group_by_use_nulls = 1

Ok.

dell9510 :) SELECT count([NULL, NULL]), count([2147483646, -2147483647, 3, 3]), uniqExact(if(number >= 1048577, number, NULL), NULL) FROM numbers(1048577) GROUP BY if(number >= 2., number, NULL) format Null

Query id: 749df7d1-2e9a-4ea8-a92b-295464433591

[dell9510] 2023.03.22 14:23:52.856923 [ 246980 ] {749df7d1-2e9a-4ea8-a92b-295464433591} <Fatal> : Logical error: 'Invalid number of rows in Chunk column UInt64 position 1: expected 65409, got 65410'.
[dell9510] <Fatal> BaseDaemon: ########################################
[dell9510] <Fatal> BaseDaemon: (version 23.3.1.2537, build id: 3018790D95A6AA08FB81899667D886CA8AA05445) (from thread 246980) (query_id: 749df7d1-2e9a-4ea8-a92b-295464433591) Received signal Aborted (6)
[dell9510] <Fatal> BaseDaemon: 4. ? / 5. gsignal / 6. abort
[dell9510] <Fatal> BaseDaemon: 7. /home/tavplubix/ch/ClickHouse/src/Common/Exception.cpp:41: DB::abortOnFailedAssertion(String const&) @ 0x22f756c3
[dell9510] <Fatal> BaseDaemon: 12. /home/tavplubix/ch/ClickHouse/src/Processors/Chunk.cpp:73: DB::Chunk::checkNumRowsIsConsistent() @ 0x2dda5f88
[dell9510] <Fatal> BaseDaemon: 13. /home/tavplubix/ch/ClickHouse/src/Processors/Chunk.cpp:17: DB::Chunk::Chunk(std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>, unsigned long) @ 0x2dda5d5f
[dell9510] <Fatal> BaseDaemon: 14. /home/tavplubix/ch/ClickHouse/src/Processors/Transforms/AggregatingTransform.cpp:33: DB::convertToChunk(DB::Block const&) @ 0x2e23e081
[dell9510] <Fatal> BaseDaemon: 15. /home/tavplubix/ch/ClickHouse/src/Processors/Transforms/AggregatingTransform.cpp:487: DB::ConvertingAggregatedToChunksTransform::mergeSingleLevel() @ 0x2e24d8b7
[dell9510] <Fatal> BaseDaemon: 16. /home/tavplubix/ch/ClickHouse/src/Processors/Transforms/AggregatingTransform.cpp:307: DB::ConvertingAggregatedToChunksTransform::work() @ 0x2e2498af
(frames 8-11 are Exception constructors; frames 17-38 are PipelineExecutor/ThreadPool/libc scaffolding)

Exception on client:
Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF)
```
| https://github.com/ClickHouse/ClickHouse/issues/47891 | https://github.com/ClickHouse/ClickHouse/pull/50291 | e1d535c890279ac5de4cc5bf44c38b223505c6ee | 9647cfa33db8295a9e538fc1f492b9fb13627675 | "2023-03-22T13:25:31Z" | c++ | "2023-05-28T23:08:02Z"
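Judging only from the files touched by the fix for the record above (`src/Interpreters/Aggregator.cpp` plus the `02770_jit_aggregation_nullable_key_fix` tests), the crash appears related to JIT-compiled aggregation over a Nullable `GROUP BY` key. If that reading is correct, a plausible but unverified mitigation on affected versions is to disable JIT compilation of aggregate functions:

```sql
SET compile_aggregate_expressions = 0;
```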
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,865 | ["tests/queries/0_stateless/02794_pushdown_invalid_get.reference", "tests/queries/0_stateless/02794_pushdown_invalid_get.sql"] | Invalid Field get from type Int64 to type Int128 in `filterPushDown`/`cloneActionsForFilterPushDown` | https://s3.amazonaws.com/clickhouse-test-reports/0/3c550b4314c0e9c8834952f8ff0ce7fe4569284d/fuzzer_astfuzzerdebug/report.html
```
dell9510 :) set joined_subquery_requires_alias=0
SET joined_subquery_requires_alias = 0
Query id: f07ad367-a11e-447b-8ee9-ff8821b42941
Ok.
0 rows in set. Elapsed: 0.050 sec.
dell9510 :) SELECT '104857.7', k FROM (SELECT toInt128(NULL) AS k FROM system.one GROUP BY toUInt256([toInt256(1048575, toInt128(-9223372036854775807), NULL)], NULL)) INNER JOIN (SELECT toInt128(-2) AS k) AS t USING (k) WHERE k
SELECT
'104857.7',
k
FROM
(
SELECT toInt128(NULL) AS k
FROM system.one
GROUP BY toUInt256([toInt256(1048575, toInt128(-9223372036854775807), NULL)], NULL)
)
INNER JOIN
(
SELECT toInt128(-2) AS k
) AS t USING (k)
WHERE k
Query id: c77b40bf-ff16-47a3-a678-ae4f19bccc09
[dell9510] 2023.03.21 20:34:50.677398 [ 192935 ] {c77b40bf-ff16-47a3-a678-ae4f19bccc09} <Fatal> : Logical error: 'Invalid Field get from type Int64 to type Int128'.
[dell9510] 2023.03.21 20:34:50.858032 [ 217792 ] <Fatal> BaseDaemon: ########################################
[dell9510] 2023.03.21 20:34:50.899801 [ 217792 ] <Fatal> BaseDaemon: (version 23.3.1.2537, build id: 3018790D95A6AA08FB81899667D886CA8AA05445) (from thread 192935) (query_id: c77b40bf-ff16-47a3-a678-ae4f19bccc09) (query: SELECT '104857.7', k FROM (SELECT toInt128(NULL) AS k FROM system.one GROUP BY toUInt256([toInt256(1048575, toInt128(-9223372036854775807), NULL)], NULL)) INNER JOIN (SELECT toInt128(-2) AS k) AS t USING (k) WHERE k) Received signal Aborted (6)
[dell9510] 2023.03.21 20:34:50.954840 [ 217792 ] <Fatal> BaseDaemon:
[dell9510] 2023.03.21 20:34:51.119385 [ 217792 ] <Fatal> BaseDaemon: Stack trace: 0x7f8b9164f8ec 0x7f8b91600ea8 0x7f8b915ea53d 0x22f756c3 0x22f75735 0x22f75b6c 0x19934d57 0x1999588f 0x1a1245d8 0x1a1941dd 0x2cad4b32 0x2ca0892a 0x2ac94de1 0x2afc7b28 0x2e631aad 0x2e63100f 0x2e62fd5f 0x2e5f925b 0x2e542ae8 0x2e5425b3 0x2c223d2b 0x2c759fb7 0x2c755d3b 0x2dd4a44d 0x2dd5d2f2 0x331eee39 0x331ef688 0x33471461 0x3346dd7a 0x3346c915 0x7f8b9164dbb5 0x7f8b916cfd90
[dell9510] 2023.03.21 20:34:51.125702 [ 217792 ] <Fatal> BaseDaemon: 4. ? @ 0x7f8b9164f8ec in ?
[dell9510] 2023.03.21 20:34:51.126362 [ 217792 ] <Fatal> BaseDaemon: 5. raise @ 0x7f8b91600ea8 in ?
[dell9510] 2023.03.21 20:34:51.127734 [ 217792 ] <Fatal> BaseDaemon: 6. abort @ 0x7f8b915ea53d in ?
[dell9510] 2023.03.21 20:34:51.897116 [ 217792 ] <Fatal> BaseDaemon: 7. /home/tavplubix/ch/ClickHouse/src/Common/Exception.cpp:41: DB::abortOnFailedAssertion(String const&) @ 0x22f756c3 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:34:52.497601 [ 217792 ] <Fatal> BaseDaemon: 8. /home/tavplubix/ch/ClickHouse/src/Common/Exception.cpp:64: DB::handle_error_code(String const&, int, bool, std::vector<void*, std::allocator<void*>> const&) @ 0x22f75735 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:34:53.452007 [ 217792 ] <Fatal> BaseDaemon: 9. /home/tavplubix/ch/ClickHouse/src/Common/Exception.cpp:92: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x22f75b6c in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:34:54.308080 [ 217792 ] <Fatal> BaseDaemon: 10. /home/tavplubix/ch/ClickHouse/src/Common/Exception.h:55: DB::Exception::Exception(String&&, int, bool) @ 0x19934d57 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:34:58.766484 [ 217792 ] <Fatal> BaseDaemon: 11. /home/tavplubix/ch/ClickHouse/src/Common/Exception.h:82: DB::Exception::Exception<DB::Field::Types::Which&, DB::Field::Types::Which const&>(int, FormatStringHelperImpl<std::type_identity<DB::Field::Types::Which&>::type, std::type_identity<DB::Field::Types::Which const&>::type>, DB::Field::Types::Which&, DB::Field::Types::Which const&) @ 0x1999588f in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:34:59.092397 [ 217792 ] <Fatal> BaseDaemon: 12. /home/tavplubix/ch/ClickHouse/src/Core/Field.h:876: DB::NearestFieldTypeImpl<std::decay<wide::integer<128ul, int>>::type, void>::Type& DB::Field::get<wide::integer<128ul, int>>() @ 0x1a1245d8 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:00.019165 [ 217792 ] <Fatal> BaseDaemon: 13. /home/tavplubix/ch/ClickHouse/src/Core/Field.h:462: auto const& DB::Field::get<wide::integer<128ul, int>>() const @ 0x1a1941dd in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:02.508428 [ 217792 ] <Fatal> BaseDaemon: 14. /home/tavplubix/ch/ClickHouse/src/Columns/ColumnVector.h:308: DB::ColumnVector<wide::integer<128ul, int>>::insert(DB::Field const&) @ 0x2cad4b32 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:03.006488 [ 217792 ] <Fatal> BaseDaemon: 15. /home/tavplubix/ch/ClickHouse/src/Columns/ColumnNullable.cpp:196: DB::ColumnNullable::insert(DB::Field const&) @ 0x2ca0892a in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:03.404607 [ 217792 ] <Fatal> BaseDaemon: 16. /home/tavplubix/ch/ClickHouse/src/DataTypes/IDataType.cpp:59: DB::IDataType::createColumnConst(unsigned long, DB::Field const&) const @ 0x2ac94de1 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:08.350035 [ 217792 ] <Fatal> BaseDaemon: 17. /home/tavplubix/ch/ClickHouse/src/Interpreters/ActionsDAG.cpp:2001: DB::ActionsDAG::cloneActionsForFilterPushDown(String const&, bool, std::vector<String, std::allocator<String>> const&, std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&) @ 0x2afc7b28 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:09.401700 [ 217792 ] <Fatal> BaseDaemon: 18. /home/tavplubix/ch/ClickHouse/src/Processors/QueryPlan/Optimizations/filterPushDown.cpp:118: DB::QueryPlanOptimizations::splitFilter(DB::QueryPlan::Node*, std::vector<String, std::allocator<String>> const&, unsigned long) @ 0x2e631aad in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:10.273270 [ 217792 ] <Fatal> BaseDaemon: 19. /home/tavplubix/ch/ClickHouse/src/Processors/QueryPlan/Optimizations/filterPushDown.cpp:352: DB::QueryPlanOptimizations::tryPushDownFilter(DB::QueryPlan::Node*, std::list<DB::QueryPlan::Node, std::allocator<DB::QueryPlan::Node>>&)::$_1::operator()(DB::JoinKind) const @ 0x2e63100f in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:11.168416 [ 217792 ] <Fatal> BaseDaemon: 20. /home/tavplubix/ch/ClickHouse/src/Processors/QueryPlan/Optimizations/filterPushDown.cpp:374: DB::QueryPlanOptimizations::tryPushDownFilter(DB::QueryPlan::Node*, std::list<DB::QueryPlan::Node, std::allocator<DB::QueryPlan::Node>>&) @ 0x2e62fd5f in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:11.740796 [ 217792 ] <Fatal> BaseDaemon: 21. /home/tavplubix/ch/ClickHouse/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp:85: DB::QueryPlanOptimizations::optimizeTreeFirstPass(DB::QueryPlanOptimizationSettings const&, DB::QueryPlan::Node&, std::list<DB::QueryPlan::Node, std::allocator<DB::QueryPlan::Node>>&) @ 0x2e5f925b in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:12.788982 [ 217792 ] <Fatal> BaseDaemon: 22. /home/tavplubix/ch/ClickHouse/src/Processors/QueryPlan/QueryPlan.cpp:463: DB::QueryPlan::optimize(DB::QueryPlanOptimizationSettings const&) @ 0x2e542ae8 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:13.729484 [ 217792 ] <Fatal> BaseDaemon: 23. /home/tavplubix/ch/ClickHouse/src/Processors/QueryPlan/QueryPlan.cpp:167: DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x2e5425b3 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:15.174501 [ 217792 ] <Fatal> BaseDaemon: 24. /home/tavplubix/ch/ClickHouse/src/Interpreters/InterpreterSelectWithUnionQuery.cpp:388: DB::InterpreterSelectWithUnionQuery::execute() @ 0x2c223d2b in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:16.383538 [ 217792 ] <Fatal> BaseDaemon: 25. /home/tavplubix/ch/ClickHouse/src/Interpreters/executeQuery.cpp:713: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x2c759fb7 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:17.818856 [ 217792 ] <Fatal> BaseDaemon: 26. /home/tavplubix/ch/ClickHouse/src/Interpreters/executeQuery.cpp:1160: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x2c755d3b in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:19.229144 [ 217792 ] <Fatal> BaseDaemon: 27. /home/tavplubix/ch/ClickHouse/src/Server/TCPHandler.cpp:420: DB::TCPHandler::runImpl() @ 0x2dd4a44d in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:20.732580 [ 217792 ] <Fatal> BaseDaemon: 28. /home/tavplubix/ch/ClickHouse/src/Server/TCPHandler.cpp:2011: DB::TCPHandler::run() @ 0x2dd5d2f2 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:21.084879 [ 217792 ] <Fatal> BaseDaemon: 29. /home/tavplubix/ch/ClickHouse/base/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x331eee39 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:21.312826 [ 217792 ] <Fatal> BaseDaemon: 30. /home/tavplubix/ch/ClickHouse/base/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x331ef688 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:21.485715 [ 217792 ] <Fatal> BaseDaemon: 31. /home/tavplubix/ch/ClickHouse/base/poco/Foundation/src/ThreadPool.cpp:188: Poco::PooledThread::run() @ 0x33471461 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:21.767334 [ 217792 ] <Fatal> BaseDaemon: 32. /home/tavplubix/ch/ClickHouse/base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x3346dd7a in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:21.999131 [ 217792 ] <Fatal> BaseDaemon: 33. /home/tavplubix/ch/ClickHouse/base/poco/Foundation/src/Thread_POSIX.cpp:335: Poco::ThreadImpl::runnableEntry(void*) @ 0x3346c915 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.03.21 20:35:22.007930 [ 217792 ] <Fatal> BaseDaemon: 34. ? @ 0x7f8b9164dbb5 in ?
[dell9510] 2023.03.21 20:35:22.009741 [ 217792 ] <Fatal> BaseDaemon: 35. ? @ 0x7f8b916cfd90 in ?
[dell9510] 2023.03.21 20:35:22.010816 [ 217792 ] <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
Exception on client:
Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF)
```
cc: @KochetovNicolai | https://github.com/ClickHouse/ClickHouse/issues/47865 | https://github.com/ClickHouse/ClickHouse/pull/51306 | 4c8a4f54ce9a1ce58ea2a6f4c7b5f0ea804ac110 | 1ebfaafec8f9abebbc76e73f783287d3cf1240d5 | "2023-03-21T19:40:25Z" | c++ | "2023-06-23T11:45:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,792 | ["docs/en/sql-reference/statements/alter/partition.md"] | Error during alter table move partition to table | Version 22.12.5.35
According to [documentation](https://clickhouse.com/docs/en/sql-reference/statements/alter/partition#move-partition-to-table) the requirements are:
1. Both tables must have the same structure. ✅
2. Both tables must have the same partition key, the same order by key and the same primary key. ✅
3. Both tables must have the same storage policy **(a disk where the partition is stored should be available for both tables).** ✅
4. Both tables must be the same engine family (replicated or non-replicated). ✅
How to reproduce:
1. Create docker-compose.yml
```
version: '2.2'
services:
clickhouse:
restart: always
image: clickhouse/clickhouse-server:22.12.5.34
container_name: clickhouse
volumes:
- ./external_store:/mnt/data
- ./data:/var/lib/clickhouse
- ./logs:/var/log/clickhouse-server
- ./etc/config.d:/etc/clickhouse-server/config.d
- ./etc/users.d:/etc/clickhouse-server/users.d
ports:
- "8123:8123"
- "9000:9000"
cap_add:
- SYS_NICE
- NET_ADMIN
- IPC_LOCK
- SYS_PTRACE
```
2. Run it: `docker-compose up -d`
3. Put **admin.xml** file to ./etc/users.d/
```
<clickhouse>
<users>
<default>
<password>default</password>
<profile>default</profile>
<quota>default</quota>
<default_database>default</default_database>
</default>
</users>
</clickhouse>
```
4. Put **listen_hosts.xml** and **storage_configuration.xml** to ./etc/config.d
```
<clickhouse>
<listen_host>0.0.0.0</listen_host>
</clickhouse>
```
```
<clickhouse>
<storage_configuration>
<disks>
<default></default>
<external>
<path>/mnt/data/</path>
</external>
</disks>
<policies>
<policy_1>
<volumes>
<volume_1>
<disk>default</disk>
</volume_1>
<volume_2>
<disk>external</disk>
</volume_2>
</volumes>
<move_factor>0</move_factor>
</policy_1>
<policy_2>
<volumes>
<volume_2>
<disk>external</disk>
</volume_2>
<volume_1>
<disk>default</disk>
</volume_1>
</volumes>
<move_factor>0</move_factor>
</policy_2>
</policies>
</storage_configuration>
</clickhouse>
```
5. Restart server: `docker restart clickhouse`
6. As test-data, we use data from the dataset [example dataset](https://clickhouse.com/docs/en/getting-started/example-datasets/opensky)
`>> wget -O- https://zenodo.org/record/5092942 | grep -oP 'https://zenodo.org/record/5092942/files/flightlist_\d+_\d+\.csv\.gz' | xargs wget`
Put data into mounted volume:
```
>> sudo mkdir ./external_store/dump
>> sudo mv flightlist_20* ./external_store/dump/
>> docker exec -it -u 0 clickhouse chmod 777 -R /mnt/data/dump
```
7. Create tables:
`docker exec -it -u 0 clickhouse clickhouse-client --password default`
```
CREATE TABLE default.opensky_policy1
(
callsign String,
number String,
icao24 String,
registration String,
typecode String,
origin String,
destination String,
firstseen DateTime,
lastseen DateTime,
day DateTime,
latitude_1 Float64,
longitude_1 Float64,
altitude_1 Float64,
latitude_2 Float64,
longitude_2 Float64,
altitude_2 Float64
) ENGINE = MergeTree
partition by toYear(day)
ORDER BY (origin, destination, callsign) SETTINGS storage_policy = 'policy_1'
```
```
CREATE TABLE default.opensky_policy2
(
callsign String,
number String,
icao24 String,
registration String,
typecode String,
origin String,
destination String,
firstseen DateTime,
lastseen DateTime,
day DateTime,
latitude_1 Float64,
longitude_1 Float64,
altitude_1 Float64,
latitude_2 Float64,
longitude_2 Float64,
altitude_2 Float64
) ENGINE = MergeTree
partition by toYear(day)
ORDER BY (origin, destination, callsign) SETTINGS storage_policy = 'policy_2'
```
8. Insert data:
```
>> docker exec -it -u 0 clickhouse bash
>> cd /mnt/data/dump
>> ls -1 flightlist_*.csv.gz | xargs -P4 -I{} bash -c 'gzip -c -d "{}" | clickhouse-client --password default --date_time_input_format best_effort --query "INSERT INTO default.opensky_policy2 FORMAT CSVWithNames"'
```
9. Execute alter table:
```
>> docker exec -it -u 0 clickhouse clickhouse-client --password default
>> ALTER TABLE opensky_policy1 MOVE PARTITION '2019' TO TABLE opensky_policy2;
```
The result will be an error:
```
Received exception from server (version 22.12.5):
Code: 478. DB::Exception: Received from localhost:9000. DB::Exception: Destination table default.opensky_policy2 (38b382a2-0bc6-495a-b4f2-ca1ba90cf57e) should have the same storage policy of source table default.opensky_policy1 (b4937797-5125-4316-907d-9a1ee15f40b8). default.opensky_policy1 (b4937797-5125-4316-907d-9a1ee15f40b8): policy_1, default.opensky_policy2 (38b382a2-0bc6-495a-b4f2-ca1ba90cf57e): policy_2. (UNKNOWN_POLICY)
```
| https://github.com/ClickHouse/ClickHouse/issues/47792 | https://github.com/ClickHouse/ClickHouse/pull/49174 | 75f9a7f087eceecc05e04214ffc2616f7ce91aa0 | ddfbc61e99d25aff2ef67ec612f9105cf8b75345 | "2023-03-20T18:06:46Z" | c++ | "2023-04-26T13:25:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,787 | ["tests/queries/0_stateless/02789_jit_cannot_convert_column.reference", "tests/queries/0_stateless/02789_jit_cannot_convert_column.sql", "tests/queries/0_stateless/02790_jit_wrong_result.reference", "tests/queries/0_stateless/02790_jit_wrong_result.sql"] | JIT compilation changes query result (both are correct) | ```
CREATE TABLE t2 (c0 Int32, c1 Int32, c2 String) ENGINE = Log() ;
INSERT INTO t2(c1, c0) VALUES (1697596429, 1259570390);
INSERT INTO t2(c1, c2) VALUES (-871444251, 's,');
INSERT INTO t2(c0, c2, c1) VALUES (-943999939, '', 1756486294);
```
Just changing the `compile_expressions` setting in the query changes the result.
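A hedged aside (my assumption about the mechanism, not a confirmed diagnosis): the GROUP BY key `log(-(t2.c0 / (t2.c0 - t2.c0)))` divides by zero and evaluates to NaN, and NaN's self-inequality is a classic way for two grouping implementations (JIT vs. interpreted) to disagree about how many groups exist. A minimal Python sketch of that float property:

```python
import math

# Stand-in for the SQL key log(-(c0 / (c0 - c0))): division by zero
# followed by log produces NaN under float semantics.
nan_key = math.nan

# NaN never compares equal to itself.
print(nan_key == nan_key)   # False
print(math.isnan(nan_key))  # True

# Equality-based grouping therefore treats every NaN key as distinct:
# both NaNs below land in their own "group".
groups = {nan_key, float("nan")}
print(len(groups))
```

If one execution path normalizes NaN keys and the other compares them naively, the two paths can legitimately produce different group counts for the same data.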
```
SELECT MIN(t2.c0)
FROM t2
GROUP BY log(-(t2.c0 / (t2.c0 - t2.c0)))
HAVING NOT (NOT (-(NOT MIN(t2.c0))))
UNION ALL
SELECT MIN(t2.c0)
FROM t2
GROUP BY log(-(t2.c0 / (t2.c0 - t2.c0)))
HAVING NOT (NOT (NOT (-(NOT MIN(t2.c0)))))
UNION ALL
SELECT MIN(t2.c0)
FROM t2
GROUP BY log(-(t2.c0 / (t2.c0 - t2.c0)))
HAVING (NOT (NOT (-(NOT MIN(t2.c0))))) IS NULL
SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0, compile_expressions = 1
Query id: 0ff319aa-38b2-4553-87b1-05a533db8e4d
ββMIN(c0)ββ
β 0 β
βββββββββββ
1 row in set. Elapsed: 0.004 sec.
SELECT MIN(t2.c0)
FROM t2
GROUP BY log(-(t2.c0 / (t2.c0 - t2.c0)))
HAVING NOT (NOT (-(NOT MIN(t2.c0))))
UNION ALL
SELECT MIN(t2.c0)
FROM t2
GROUP BY log(-(t2.c0 / (t2.c0 - t2.c0)))
HAVING NOT (NOT (NOT (-(NOT MIN(t2.c0)))))
UNION ALL
SELECT MIN(t2.c0)
FROM t2
GROUP BY log(-(t2.c0 / (t2.c0 - t2.c0)))
HAVING (NOT (NOT (-(NOT MIN(t2.c0))))) IS NULL
SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0, compile_expressions = 0
Query id: 7ee14624-efb5-44b7-ae09-429a180a2816
βββββMIN(c0)ββ
β 1259570390 β
ββββββββββββββ
ββMIN(c0)ββ
β 0 β
βββββββββββ
2 rows in set. Elapsed: 0.004 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/47787 | https://github.com/ClickHouse/ClickHouse/pull/51113 | db82e94e68c48dd01a2e91be597cbedc7b56a188 | e28dc5d61c924992ddd0066e0e5e5bb05b848db3 | "2023-03-20T16:18:56Z" | c++ | "2023-12-08T02:27:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,756 | ["src/IO/ZlibInflatingReadBuffer.cpp", "src/IO/ZlibInflatingReadBuffer.h"] | insert from INFILE large than 4G causes "inflateReset failed: buffer error" | I recently found that when inserting from a **large gzip file (>4GB)**, ClickHouse throws an error like this:
`Code: 354. DB::Exception: inflateReset failed: buffer error: While executing ParallelParsingBlockInputFormat: While executing File: data for INSERT was parsed from file: (in query: insert into RAW from INFILE '20190805.csv.gz' FORMAT CSVWithNames). (ZLIB_INFLATE_FAILED)`
This happens as long as the gzip file is larger than 4GB.
Here is my command; I run the SQL from the ClickHouse client:
`clickhouse-client --host="127.0.0.1" --port="9000" --user="" --password="" --database="" --query="insert into RAW from INFILE '20190805.csv.gz' FORMAT CSVWithNames"`
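A possible client-side workaround (my own sketch, not an official ClickHouse recommendation) is to decompress the file outside the server and pipe plain CSV into `clickhouse-client`, so the server-side zlib inflate path with its >4GB limit is never exercised. The chunked-read pattern looks like this in Python; the file name is the one from the report, and the function name is mine:

```python
import gzip

def stream_decompress(path, chunk_size=64 * 1024):
    """Yield decompressed chunks; no single inflate buffer grows unbounded."""
    with gzip.open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# Usage sketch: feed the chunks from stream_decompress('20190805.csv.gz')
# into the stdin of `clickhouse-client --query "INSERT INTO RAW FORMAT
# CSVWithNames"` instead of letting the server inflate the whole file.
```

The same effect can be had with `zcat file.csv.gz | clickhouse-client ...` if shelling out is acceptable.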
FYI, my env version:
_Red Hat 4.8.5-39
ClickHouse server version 23.2.3.17 (official build).
ClickHouse client version 23.2.3.17 (official build).
gzip 1.5_ | https://github.com/ClickHouse/ClickHouse/issues/47756 | https://github.com/ClickHouse/ClickHouse/pull/47796 | a2182f265985bafca4b710d01e274381c313ee3a | fd567e03a5207abbb4407601cafe729c8a4778b1 | "2023-03-20T11:58:37Z" | c++ | "2023-03-24T14:36:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,730 | ["src/Analyzer/Passes/QueryAnalysisPass.cpp", "src/Interpreters/Context.cpp", "tests/queries/0_stateless/02458_use_structure_from_insertion_table.reference", "tests/queries/0_stateless/02458_use_structure_from_insertion_table.sql"] | Setting use_structure_from_insertion_table_in_table_functions doesn't work properly with virtual columns. | **Describe what's wrong**
The setting `set use_structure_from_insertion_table_in_table_functions=1` doesn't work properly
setting:
```
set use_structure_from_insertion_table_in_table_functions=1;
insert into function file(data.LineAsString) select 'Hello';
```
table with the same column name:
```
create table test (_path String) engine=Memory;
insert into test select _path from file(data.LineAsString);
Received exception:
Code: 352. DB::Exception: Block structure mismatch in (columns with identical name must have identical structure) stream: different types:
_path String String(size = 0)
_path LowCardinality(String) ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, String(size = 1))). (AMBIGUOUS_COLUMN_NAME)
```
Here the error makes perfect sense, since the virtual column `_path` is a `LowCardinality(String)` while the table column is `String`. However, the cause of the error shows that something wrong is going on here: apparently the columns are matched by name and not by position, contrary to what is stated in the manual. And seemingly due to this we have some wrong behavior here:
```
insert into test select CAST(_path as String) from file(data.LineAsString);
Received exception:
Code: 352. DB::Exception: Block structure mismatch in (columns with identical name must have identical structure) stream: different types:
_path String String(size = 0)
_path LowCardinality(String) ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, String(size = 1))). (AMBIGUOUS_COLUMN_NAME)
```
and even more strange here:
```
drop table test;
create table test (_path LowCardinality(String)) engine=Memory;
insert into test select _path from file(data.LineAsString);
Received exception:
Code: 80. DB::Exception: This input format is only suitable for tables with a single column of type String.: While executing File. (INCORRECT_QUERY)
```
Not sure about this one; probably this is because the `file()` table function produces a single `String` column, and virtual columns were not taken into account.
However, for a table whose column name differs from `_path`, there is no issue:
```
drop table test;
create table test (aaa String) engine=Memory;
insert into test select _path from file(data.LineAsString);
```
but from the code it seems that in this case the setting `use_structure_from_insertion_table_in_table_functions` is ignored and a CAST is added.
**Does it reproduce on recent release?**
reproduces on current master
| https://github.com/ClickHouse/ClickHouse/issues/47730 | https://github.com/ClickHouse/ClickHouse/pull/47962 | 2cb82526763d9cfd90fe07c27ad3668a459c161b | 8d1924cc9a68083f063e637aa5021eb5213afc81 | "2023-03-19T21:37:43Z" | c++ | "2023-04-06T23:01:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,715 | ["src/AggregateFunctions/AggregateFunctionFactory.cpp", "src/AggregateFunctions/AggregateFunctionFactory.h", "tests/queries/0_stateless/02688_long_aggregate_function_names.reference", "tests/queries/0_stateless/02688_long_aggregate_function_names.sql"] | This query is slow | ```
SELECT hex(unhex('0aaf0a0a0a0a0a0a0a0a0000724e756c6c2c1d4e756c6c61626c4c4f4e47424c4f42756c6c2c494e4554360a0aaf0a0a0a0a0a0a0a0a0000746f57746172 744f664d690a0a0a0a0a0000746f57746172744f664d696e757465724e756c6c4f724e756c6c4e724e756c6c4fca4e756c6c4f724e756c0700000000000000494e4554360a0aaf0a0a0a0a0a0a0a0a0000724e756c6c2c1d4e756c6c61626c4c4f4e47424c4f42756c6c2c494e4554360a0aaf0a0a0a0a0a0a0a0a006c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c2c494e4554360a0aaf0a0a0a0a0a0a0a0a0000724e756c6c2c1d4e756c6c61626c4c4f4e47424c4f42756c6c2c494e4554360a0aaf0a0a0a0a0a0a0a0a0000746f57746172744f664d690a0a0a0a0a0000746f57746172744f664d696e757465724e756c4c4f4e47424c4f42756c6c2c494e4554360a0aaf0a0a0a0a0a0a0a0a0000746f57746172744f664d690a0a0a0a0a0000746f57746172744f664d696e757465724e756c6c4f724e756c6c4e724e756c6c4fca4e756c6c4f724e756c0700000000000000494e4554360a0aaf0a0a0a0a0a0a0a0a0000724e756c6c2c1d4e756c6c61626c4c4f4e47424c4f42756c6c2c494e4554360a6c6c4f724e756c6c4e724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f726c6c4f4e75724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c6c4f724e756c4554360a0aaf0a0a0a0a0a0a0a0a0000724e756c6c2c1d4e756c6c61626c4c4f4e47424c4f42756c6c2c494e4554360a0aaf 
0a0a0a0a0a0a0a0a0000724e756c6c2c1d4e756c6c61626c4c4f4e47424c4f4200000065316d')::AggregateFunction(MinOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrN
ullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNul
lOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullO
rNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNullOrNull, Inet6))
```
Takes 10 seconds. | https://github.com/ClickHouse/ClickHouse/issues/47715 | https://github.com/ClickHouse/ClickHouse/pull/47716 | d8fb6a7878de2d91ba3564b91db620e5d2d9f71f | 02b8d2bbf8a3d30aae49f77b7e7d392ef51a5028 | "2023-03-19T08:27:56Z" | c++ | "2023-03-19T14:38:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,679 | ["src/Storages/StorageS3Settings.cpp"] | SIGSEGV while uploading to S3 for part > INT_MAX | Reproducer:
```sql
INSERT INTO FUNCTION s3('https://test.s3.us-east-1.amazonaws.com/clickhouse/test.csv', '', '', 'TSV')
SETTINGS s3_truncate_on_insert = 1, s3_max_single_part_upload_size = '10Gi'
SELECT repeat('a', 1024)
FROM numbers((pow(2, 30) * 5) / 1024)
SETTINGS s3_truncate_on_insert = 1, s3_max_single_part_upload_size = '10Gi'
```
Note that usually this is not a problem, since multipart upload is used, but when you turn it off you can see the problem.
And the problem is that `stringstream::gcount` breaks the `stringstream` when it holds more than `INT_MAX` bytes.
Here is a patch in llvm-project - https://reviews.llvm.org/D146294 | https://github.com/ClickHouse/ClickHouse/issues/47679 | https://github.com/ClickHouse/ClickHouse/pull/48816 | 000a63d61ca4b7c08dec68db8ef026f4557a8703 | 69cac0a7882192cc8b22bba2760574c6fae2e3c8 | "2023-03-17T13:26:57Z" | c++ | "2023-04-16T12:46:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,643 | ["src/Core/Settings.h", "src/Storages/MergeTree/BackgroundJobsAssignee.cpp", "src/Storages/MergeTree/BackgroundJobsAssignee.h", "src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeData.h", "src/Storages/MergeTree/MergeTreePartsMover.h", "tests/integration/test_move_partition_to_volume_async/__init__.py", "tests/integration/test_move_partition_to_volume_async/configs/storage_policy.xml", "tests/integration/test_move_partition_to_volume_async/test.py"] | Make ALTER TABLE MOVE PARTITION TO DISK asynchronous | ClickHouse has particular settings to make ALTER DELETE or MODIFY COLUMN asynchronous, but there is no such setting for MOVE PARTITION TO DISK.
I believe it would be a good idea to add such a setting for MOVE PARTITION TO DISK.
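For context, a sketch of the statement in question and of the existing analogue for mutations (the table, partition, and disk names below are illustrative; `mutations_sync` is the existing setting):

```sql
-- Today this statement blocks until the data is physically moved:
ALTER TABLE events MOVE PARTITION '202303' TO DISK 'cold_hdd';

-- Mutations already have such a knob, e.g. fire-and-forget ALTER DELETE:
SET mutations_sync = 0;  -- do not wait for the mutation to finish
ALTER TABLE events DELETE WHERE event_date < '2022-01-01';
```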
| https://github.com/ClickHouse/ClickHouse/issues/47643 | https://github.com/ClickHouse/ClickHouse/pull/56809 | 8fe093c355ce7513395622a1364e70d93ff37b5d | 437a911d7b995ed7b9822a2b73cdf2406b583a1f | "2023-03-16T09:21:01Z" | c++ | "2023-11-16T11:47:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,574 | ["tests/queries/0_stateless/01300_client_save_history_when_terminated_long.expect"] | Flaky test 01300_client_save_history_when_terminated_long | 01300_client_save_history_when_terminated_long test is flaky [link](https://play.clickhouse.com/play?user=play#d2l0aCAKJyUwMTMwMF9jbGllbnRfc2F2ZV9oaXN0b3J5X3doZW5fdGVybWluYXRlZF9sb25nJScgYXMgbmFtZV9wYXR0ZXJuLAonJSUnIGFzIGNvbnRleHRfcGF0dGVybiwKJzIwMjItMDktMDEnIGFzIHN0YXJ0X2RhdGUsCig0NTQ2MSkgYXMgbm9pc3lfcHJzLAooJ1N0YXRlbGVzcyB0ZXN0cyAoYXNhbiknLCAnU3RhdGVsZXNzIHRlc3RzIChhZGRyZXNzKScsICdTdGF0ZWxlc3MgdGVzdHMgKGFkZHJlc3MsIGFjdGlvbnMpJykgYXMgYmFja3BvcnRfYW5kX3JlbGVhc2Vfc3BlY2lmaWNfY2hlY2tzCnNlbGVjdCAKdG9TdGFydE9mRGF5KGNoZWNrX3N0YXJ0X3RpbWUpIGFzIGQsCmNvdW50KCksICBncm91cFVuaXFBcnJheShwdWxsX3JlcXVlc3RfbnVtYmVyKSwgYW55KHJlcG9ydF91cmwpCmZyb20gY2hlY2tzIHdoZXJlIHN0YXJ0X2RhdGUgPD0gY2hlY2tfc3RhcnRfdGltZSBhbmQgcHVsbF9yZXF1ZXN0X251bWJlciBub3QgaW4gCihzZWxlY3QgcHVsbF9yZXF1ZXN0X251bWJlciBhcyBwcm4gZnJvbSBjaGVja3Mgd2hlcmUgcHJuIT0wIGFuZCBzdGFydF9kYXRlIDw9IGNoZWNrX3N0YXJ0X3RpbWUgYW5kIGNoZWNrX25hbWUgaW4gYmFja3BvcnRfYW5kX3JlbGVhc2Vfc3BlY2lmaWNfY2hlY2tzKSAKYW5kIHRlc3RfbmFtZSBsaWtlIG5hbWVfcGF0dGVybiBhbmQgdGVzdF9jb250ZXh0X3JhdyBpbGlrZSBjb250ZXh0X3BhdHRlcm4KYW5kIHB1bGxfcmVxdWVzdF9udW1iZXIgbm90IGluIG5vaXN5X3BycwphbmQgdGVzdF9zdGF0dXMgaW4gKCdGQUlMJywgJ0ZMQUtZJywgJ2ZhaWx1cmUnKSBncm91cCBieSBkIG9yZGVyIGJ5IGQgZGVzYw==)
https://s3.amazonaws.com/clickhouse-test-reports/47495/8a7bc3250d1009782001fa8e06f4a7c07ff84b44/stateless_tests__release__s3_storage__[2/2].html | https://github.com/ClickHouse/ClickHouse/issues/47574 | https://github.com/ClickHouse/ClickHouse/pull/47606 | 9d14c14a54c227dd37366b3adf3bcba800d341ea | e42238a4e728d0947b8697ca8f9a545a44839e0a | "2023-03-14T13:30:18Z" | c++ | "2023-03-15T11:49:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,571 | ["programs/odbc-bridge/ColumnInfoHandler.cpp"] | ODBC: Columns definition was not returned. (LOGICAL_ERROR) | ```
/var/log/clickhouse-server/clickhouse-server.err.log:2023.03.14 00:29:07.031349 [ 166170 ] {110f8654-7d7d-4b47-b6b0-3ce83414a80f} <Error> ReadWriteBufferFromHTTP: HTTP request to `http://127.0.0.1:9018/columns_info?use_connection_pooling=1&version=1&connection_string=DSN%3D%7BClickHouse%20DSN%20%28ANSI%29%7D&schema=test_15&table=t&external_table_functions_use_nulls=1` failed at try 1/1 with bytes read: 0/unknown. Error: DB::HTTPException: Received error from remote server /columns_info?use_connection_pooling=1&version=1&connection_string=DSN%3D%7BClickHouse%20DSN%20%28ANSI%29%7D&schema=test_15&table=t&external_table_functions_use_nulls=1. HTTP status code: 500 Internal Server Error, body: Error getting columns from ODBC 'Code: 49. DB::Exception: Columns definition was not returned. (LOGICAL_ERROR) (version 23.2.4.12 (official build))'
```
That was different table:
```
2023.03.14 00:29:07.025329 [ 1426 ] {13cdfcdf-cec6-4a8f-b297-9e59484684e7} <Debug> executeQuery: (from [::1]:38322) SELECT database AS TABLE_CAT, '' AS TABLE_SCHEM, table AS TABLE_NAME, name AS COLUMN_NAME, 0 AS DATA_TYPE, type AS TYPE_NAME, 0 AS COLUMN_SIZE, 0 AS BUFFER_LENGTH, 0 AS DECIMAL_DIGITS, 0 AS NUM_PREC_RADIX, 0 AS NULLABLE, 0 AS REMARKS, 0 AS COLUMN_D2023.03.14 00:24:34.437510 [ 166135 ] {dc564bda-d591-4f81-8390-0eba8fb0c647} <Debug> executeQuery: (from [::1]:59812) (comment: 00875_join_right_nulls_ors.sql) CREATE TABLE t (x String) ENGINE = Log(); (stage: Complete)
...
2023.03.14 00:29:06.814632 [ 166283 ] {3e116a7c-21a7-49e0-bd50-d82dfd4dc9ca} <Error> executeQuery: Code: 57. DB::Exception: Table test_15.t already exists. (TABLE_ALREADY_EXISTS) (version 23.2.4.12 (official build)) (from [::1]:34616) (comment: 01086_odbc_roundtrip.sh) (in query: CREATE TABLE t (x UInt8, y Float32, z String) ENGINE = Memory), Stack trace (when copying this message, always include the lines below):
...
2023.03.14 00:29:07.141725 [ 166170 ] {110f8654-7d7d-4b47-b6b0-3ce83414a80f} <Error> executeQuery: Code: 86. DB::HTTPException: Received error from remote server /columns_info?use_connection_pooling=1&version=1&connection_string=DSN%3D%7BClickHouse%20DSN%20%28ANSI%29%7D&schema=test_15&table=t&external_table_functions_use_nulls=1. HTTP status code: 500 Internal Server Error, body: Error getting columns from ODBC 'Code: 49. DB::Exception: Columns definition was not returned. (LOGICAL_ERROR) (version 23.2.4.12 (official build))'
```
And some ODBC logs:
```
2023.03.14 00:29:06.974322 [ 1426 ] {12bc67a9-6484-451a-bd7d-8f00113d8dd9} <Debug> executeQuery: (from [::1]:38322) SELECT CAST(database, 'Nullable(String)') AS TABLE_CAT, CAST(NULL, 'Nullable(String)') AS TABLE_SCHEM, CAST(name, 'Nullable(String)') AS TABLE_NAME, CAST('TABLE', 'Nullable(String)') AS TABLE_TYPE, CAST(NULL, 'Nullable(String)') AS REMARKS FROM system.tables WHERE (1 == 1) AND isNotNull(TABLE_CAT) AND coalesce(TABLE_CAT, '') LIKE 'test_15' AND isNotNull(TABLE_NAME) AND coalesce(TABLE_NAME, '') LIKE 't' ORDER BY TABLE_TYPE, TABLE_CAT, TABLE_SCHEM, TABLE_NAME (stage: Complete)
...
2023.03.14 00:29:07.024070 [ 1426 ] {12bc67a9-6484-451a-bd7d-8f00113d8dd9} <Information> executeQuery: Read 1 rows, 26.00 B in 0.049966 sec., 20.013609254292916 rows/sec., 520.35 B/sec.
...
2023.03.14 00:29:07.025329 [ 1426 ] {13cdfcdf-cec6-4a8f-b297-9e59484684e7} <Debug> executeQuery: (from [::1]:38322) SELECT database AS TABLE_CAT, '' AS TABLE_SCHEM, table AS TABLE_NAME, name AS COLUMN_NAME, 0 AS DATA_TYPE, type AS TYPE_NAME, 0 AS COLUMN_SIZE, 0 AS BUFFER_LENGTH, 0 AS DECIMAL_DIGITS, 0 AS NUM_PREC_RADIX, 0 AS NULLABLE, 0 AS REMARKS, 0 AS COLUMN_DEF, 0 AS SQL_DATA_TYPE, 0 AS SQL_DATETIME_SUB, 0 AS CHAR_OCTET_LENGTH, 0 AS ORDINAL_POSITION, 0 AS IS_NULLABLE FROM system.columns WHERE (1 == 1) AND isNotNull(TABLE_CAT) AND coalesce(TABLE_CAT, '') LIKE 'test_15' AND TABLE_NAME LIKE 't' ORDER BY TABLE_CAT, TABLE_SCHEM, TABLE_NAME, ORDINAL_POSITION (stage: Complete)
```
And no log with `Read {} rows`, so there was no results.
Table should be removed between query to `system.tables` and `system.columns` by ODBC bridge, however I cannot find this in the query log.
CI: https://s3.amazonaws.com/clickhouse-test-reports/47541/3d247b8635da44bccfdeb5fcd53be7130b8d0a32/upgrade_check__msan_.html | https://github.com/ClickHouse/ClickHouse/issues/47571 | https://github.com/ClickHouse/ClickHouse/pull/47573 | 8456e5721d9d98b4910527e04e33516fe386ae85 | 9d14c14a54c227dd37366b3adf3bcba800d341ea | "2023-03-14T13:19:10Z" | c++ | "2023-03-15T11:31:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,533 | ["src/Interpreters/ActionsVisitor.cpp", "src/Interpreters/evaluateConstantExpression.cpp", "src/Parsers/ASTSetQuery.cpp", "src/Parsers/ASTSetQuery.h", "tests/queries/0_stateless/02815_logical_error_cannot_get_column_name_of_set.reference", "tests/queries/0_stateless/02815_logical_error_cannot_get_column_name_of_set.sql"] | trying to get name of not a column: Set for table function | ```sql
SELECT count() FROM mysql(mysql('127.0.0.1:9004', currentDatabase(), foo, 'default', '', SETTINGS connection_pool_size = 1), '127.0.0.1:9004', currentDatabase(), foo, '', '')
```
https://s3.amazonaws.com/clickhouse-test-reports/47316/027db789d6a40fa11a3dfbc93bbda1825a8371d6/fuzzer_astfuzzerasan/report.html
https://s3.amazonaws.com/clickhouse-test-reports/47316/027db789d6a40fa11a3dfbc93bbda1825a8371d6/fuzzer_astfuzzerubsan/report.html
| https://github.com/ClickHouse/ClickHouse/issues/47533 | https://github.com/ClickHouse/ClickHouse/pull/52158 | 89552bbde5f7f452bb07c6b5c9fb1da324524620 | 482c8b5cde896ee4d84e4b8886c8a0726b4e0784 | "2023-03-13T14:18:11Z" | c++ | "2023-07-18T19:17:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,530 | ["docs/en/sql-reference/functions/string-search-functions.md", "src/Parsers/ExpressionListParsers.cpp", "tests/queries/0_stateless/25401_regexp_operator.reference", "tests/queries/0_stateless/25401_regexp_operator.sql"] | MySQL compatibility: REGEXP operator support | **Use case**
_NB: [the cell towers dataset](https://clickhouse.com/docs/en/getting-started/example-datasets/cell-towers/) is used here as an example._
When using Looker Studio with ClickHouse as a pseudo-MySQL data source, the following generated query fails because the [REGEXP](https://dev.mysql.com/doc/refman/8.0/en/regexp.html#operator_regexp) operator is not supported yet:
```
SELECT AVG(area) AS qt_3rarca233c, COUNT(1) AS qt_6er8nps33c, MAX(range) AS qt_6huada233c, SUM(area) AS qt_b9ehfa233c, MIN(range) AS qt_ufjdha233c, radio
FROM (SELECT * FROM datasets.cell_towers) AS t
WHERE (radio REGEXP '^^C.*$')
GROUP BY radio;
```
This limits the usage of the string-type filters in Looker Studio.
Full stack trace:
```
2023.03.10 20:18:39.173880 [ 509 ] {mysql:2648:8e360bc1-262a-4bf6-a488-f612f1c0a444} <Debug> executeQuery: (from 74.125.88.49:37438) SELECT AVG(area) AS qt_3rarca233c, COUNT(1) AS qt_6er8nps33c, MAX(range) AS qt_6huada233c, SUM(area) AS qt_b9ehfa233c, MIN(range) AS qt_ufjdha233c, radio FROM (SELECT * FROM datasets.cell_towers) AS t WHERE (radio REGEXP '^^C.*$') GROUP BY radio; (stage: Complete)
2023.03.10 20:18:39.174241 [ 509 ] {mysql:2648:8e360bc1-262a-4bf6-a488-f612f1c0a444} <Error> executeQuery: Code: 62. DB::Exception: Syntax error: failed at position 215 ('REGEXP'): REGEXP '^^C.*$') GROUP BY radio;. Expected one of: token, Dot, Comma, ClosingRoundBracket, OR, AND, BETWEEN, NOT BETWEEN, LIKE, ILIKE, NOT LIKE, NOT ILIKE, IN, NOT IN, GLOBAL IN, GLOBAL NOT IN, MOD, DIV, IS NULL, IS NOT NULL, alias, AS. (SYNTAX_ERROR) (version 23.3.1.387 (official build)) (from 74.125.88.49:37438) (in query: SELECT AVG(area) AS qt_3rarca233c, COUNT(1) AS qt_6er8nps33c, MAX(range) AS qt_6huada233c, SUM(area) AS qt_b9ehfa233c, MIN(range) AS qt_ufjdha233c, radio FROM (SELECT * FROM datasets.cell_towers) AS t WHERE (radio REGEXP '^^C.*$') GROUP BY radio;), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe0cb6d5 in /usr/bin/clickhouse
1. ? @ 0x8ccf3ed in /usr/bin/clickhouse
2. DB::parseQueryAndMovePosition(DB::IParser&, char const*&, char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, bool, unsigned long, unsigned long) @ 0x14ddbd5f in /usr/bin/clickhouse
3. ? @ 0x13994a3c in /usr/bin/clickhouse
4. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (DB::QueryResultDetails const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x1399cf32 in /usr/bin/clickhouse
5. DB::MySQLHandler::comQuery(DB::ReadBuffer&) @ 0x146f5a13 in /usr/bin/clickhouse
6. DB::MySQLHandler::run() @ 0x146f2502 in /usr/bin/clickhouse
7. Poco::Net::TCPServerConnection::start() @ 0x176321b4 in /usr/bin/clickhouse
8. Poco::Net::TCPServerDispatcher::run() @ 0x176333db in /usr/bin/clickhouse
9. Poco::PooledThread::run() @ 0x177bac87 in /usr/bin/clickhouse
10. Poco::ThreadImpl::runnableEntry(void*) @ 0x177b86bd in /usr/bin/clickhouse
11. ? @ 0x7fcbd2203609 in ?
12. __clone @ 0x7fcbd2128133 in ?
2023.03.10 20:18:39.174890 [ 509 ] {} <Error> MySQLHandler: MySQLHandler: Cannot read packet: : Code: 62. DB::Exception: Syntax error: failed at position 215 ('REGEXP'): REGEXP '^^C.*$') GROUP BY radio;. Expected one of: token, Dot, Comma, ClosingRoundBracket, OR, AND, BETWEEN, NOT BETWEEN, LIKE, ILIKE, NOT LIKE, NOT ILIKE, IN, NOT IN, GLOBAL IN, GLOBAL NOT IN, MOD, DIV, IS NULL, IS NOT NULL, alias, AS. (SYNTAX_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe0cb6d5 in /usr/bin/clickhouse
1. ? @ 0x8ccf3ed in /usr/bin/clickhouse
2. DB::parseQueryAndMovePosition(DB::IParser&, char const*&, char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, bool, unsigned long, unsigned long) @ 0x14ddbd5f in /usr/bin/clickhouse
3. ? @ 0x13994a3c in /usr/bin/clickhouse
4. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (DB::QueryResultDetails const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x1399cf32 in /usr/bin/clickhouse
5. DB::MySQLHandler::comQuery(DB::ReadBuffer&) @ 0x146f5a13 in /usr/bin/clickhouse
6. DB::MySQLHandler::run() @ 0x146f2502 in /usr/bin/clickhouse
7. Poco::Net::TCPServerConnection::start() @ 0x176321b4 in /usr/bin/clickhouse
8. Poco::Net::TCPServerDispatcher::run() @ 0x176333db in /usr/bin/clickhouse
9. Poco::PooledThread::run() @ 0x177bac87 in /usr/bin/clickhouse
10. Poco::ThreadImpl::runnableEntry(void*) @ 0x177b86bd in /usr/bin/clickhouse
11. ? @ 0x7fcbd2203609 in ?
12. __clone @ 0x7fcbd2128133 in ?
(version 23.3.1.387 (official build))
```
**Describe the solution you'd like**
[REGEXP](https://dev.mysql.com/doc/refman/8.0/en/regexp.html#operator_regexp) operator is supported via MySQL wire protocol.
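As a workaround until the operator is parsed natively, the same predicate can be written with ClickHouse's own `match` function (re2 regular expressions) — though that does not help with queries generated by the BI tool itself, which is the point of this request:

```sql
SELECT radio, count(1)
FROM datasets.cell_towers
WHERE match(radio, '^^C.*$')  -- equivalent of: radio REGEXP '^^C.*$'
GROUP BY radio;
```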
CC @alexey-milovidov @mshustov | https://github.com/ClickHouse/ClickHouse/issues/47530 | https://github.com/ClickHouse/ClickHouse/pull/47869 | 41e3d876d59d495cd3c05f9a96e91cc3547d8dac | 784b34f5d86cb78cc0730944bbc6c96e59726d61 | "2023-03-13T12:17:32Z" | c++ | "2023-03-22T09:31:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,529 | ["src/Functions/positionCaseInsensitive.cpp", "tests/queries/0_stateless/02680_instr_alias_for_position_case_insensitive.reference", "tests/queries/0_stateless/02680_instr_alias_for_position_case_insensitive.sql"] | MySQL compatibility: INSTR function support | **Use case**
_NB: [the cell towers dataset](https://clickhouse.com/docs/en/getting-started/example-datasets/cell-towers/) is used here as an example._
When using Looker Studio with ClickHouse as a pseudo-MySQL data source, the following query fails because [INSTR](https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_instr) is not supported yet:
```
SELECT radio, mcc, net, area, cell, unit, lon, lat, range, samples, changeable, created, updated, averageSignal
FROM datasets.cell_towers
WHERE INSTR(radio, 'M') > 0
```
This limits the usage of the string-type filters in Looker Studio.
Full stack trace:
```
2023.03.10 20:12:34.022342 [ 512 ] {mysql:2634:6c4d87ce-4b56-4ddb-80a7-92d81a69eb9b} <Error> executeQuery: Code: 46. DB::Exception: Unknown function INSTR: While processing SELECT radio, mcc, net, area, cell, unit, lon, lat, range, samples, changeable, created, updated, averageSignal FROM datasets.cell_towers WHERE INSTR(radio, 'M') > 0. (UNKNOWN_FUNCTION) (version 23.3.1.387 (official build)) (from 74.125.88.50:36828) (in query: SELECT AVG(area) AS qt_3rarca233c, COUNT(1) AS qt_6er8nps33c, MAX(range) AS qt_6huada233c, SUM(area) AS qt_b9ehfa233c, MIN(range) AS qt_ufjdha233c, radio FROM (SELECT * FROM datasets.cell_towers) AS t WHERE (INSTR(radio, 'M') > 0) GROUP BY radio;), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe0cb6d5 in /usr/bin/clickhouse
1. ? @ 0xe17079e in /usr/bin/clickhouse
2. DB::FunctionFactory::getImpl(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context const>) const @ 0x123d659f in /usr/bin/clickhouse
3. DB::FunctionFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context const>) const @ 0x123d70ca in /usr/bin/clickhouse
4. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x12d13746 in /usr/bin/clickhouse
5. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x12d14369 in /usr/bin/clickhouse
6. DB::ActionsMatcher::visit(std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x12d10941 in /usr/bin/clickhouse
7. ? @ 0x12d07e55 in /usr/bin/clickhouse
8. DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ActionsDAG>&, bool) @ 0x12ce691b in /usr/bin/clickhouse
9. DB::KeyCondition::getBlockWithConstants(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::TreeRewriterResult const> const&, std::__1::shared_ptr<DB::Context const>) @ 0x1421db35 in /usr/bin/clickhouse
10. DB::MergeTreeWhereOptimizer::MergeTreeWhereOptimizer(DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, unsigned long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, unsigned long>>>, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, Poco::Logger*) @ 0x1444474a in /usr/bin/clickhouse
11. ? @ 0x135e89e3 in /usr/bin/clickhouse
12. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>) @ 0x135e42f9 in /usr/bin/clickhouse
13. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x136761a2 in /usr/bin/clickhouse
14. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x1367414a in /usr/bin/clickhouse
15. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x136734ec in /usr/bin/clickhouse
16. ? @ 0x135e99f2 in /usr/bin/clickhouse
17. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>) @ 0x135e42f9 in /usr/bin/clickhouse
18. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x136761a2 in /usr/bin/clickhouse
19. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x1367414a in /usr/bin/clickhouse
20. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x1359ddd0 in /usr/bin/clickhouse
21. ? @ 0x13996cc0 in /usr/bin/clickhouse
22. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (DB::QueryResultDetails const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x1399cf32 in /usr/bin/clickhouse
23. DB::MySQLHandler::comQuery(DB::ReadBuffer&) @ 0x146f5a13 in /usr/bin/clickhouse
24. DB::MySQLHandler::run() @ 0x146f2502 in /usr/bin/clickhouse
25. Poco::Net::TCPServerConnection::start() @ 0x176321b4 in /usr/bin/clickhouse
26. Poco::Net::TCPServerDispatcher::run() @ 0x176333db in /usr/bin/clickhouse
27. Poco::PooledThread::run() @ 0x177bac87 in /usr/bin/clickhouse
28. Poco::ThreadImpl::runnableEntry(void*) @ 0x177b86bd in /usr/bin/clickhouse
29. ? @ 0x7fcbd2203609 in ?
30. __clone @ 0x7fcbd2128133 in ?
2023.03.10 20:12:34.022723 [ 512 ] {} <Error> MySQLHandler: MySQLHandler: Cannot read packet: : Code: 46. DB::Exception: Unknown function INSTR: While processing SELECT radio, mcc, net, area, cell, unit, lon, lat, range, samples, changeable, created, updated, averageSignal FROM datasets.cell_towers WHERE INSTR(radio, 'M') > 0. (UNKNOWN_FUNCTION), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe0cb6d5 in /usr/bin/clickhouse
1. ? @ 0xe17079e in /usr/bin/clickhouse
2. DB::FunctionFactory::getImpl(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context const>) const @ 0x123d659f in /usr/bin/clickhouse
3. DB::FunctionFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context const>) const @ 0x123d70ca in /usr/bin/clickhouse
4. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x12d13746 in /usr/bin/clickhouse
5. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x12d14369 in /usr/bin/clickhouse
6. DB::ActionsMatcher::visit(std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x12d10941 in /usr/bin/clickhouse
7. ? @ 0x12d07e55 in /usr/bin/clickhouse
8. DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ActionsDAG>&, bool) @ 0x12ce691b in /usr/bin/clickhouse
9. DB::KeyCondition::getBlockWithConstants(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::TreeRewriterResult const> const&, std::__1::shared_ptr<DB::Context const>) @ 0x1421db35 in /usr/bin/clickhouse
10. DB::MergeTreeWhereOptimizer::MergeTreeWhereOptimizer(DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, unsigned long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, unsigned long>>>, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, Poco::Logger*) @ 0x1444474a in /usr/bin/clickhouse
11. ? @ 0x135e89e3 in /usr/bin/clickhouse
12. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>) @ 0x135e42f9 in /usr/bin/clickhouse
13. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x136761a2 in /usr/bin/clickhouse
14. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x1367414a in /usr/bin/clickhouse
15. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x136734ec in /usr/bin/clickhouse
16. ? @ 0x135e99f2 in /usr/bin/clickhouse
17. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>) @ 0x135e42f9 in /usr/bin/clickhouse
18. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x136761a2 in /usr/bin/clickhouse
19. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x1367414a in /usr/bin/clickhouse
20. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x1359ddd0 in /usr/bin/clickhouse
21. ? @ 0x13996cc0 in /usr/bin/clickhouse
22. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (DB::QueryResultDetails const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x1399cf32 in /usr/bin/clickhouse
23. DB::MySQLHandler::comQuery(DB::ReadBuffer&) @ 0x146f5a13 in /usr/bin/clickhouse
24. DB::MySQLHandler::run() @ 0x146f2502 in /usr/bin/clickhouse
25. Poco::Net::TCPServerConnection::start() @ 0x176321b4 in /usr/bin/clickhouse
26. Poco::Net::TCPServerDispatcher::run() @ 0x176333db in /usr/bin/clickhouse
27. Poco::PooledThread::run() @ 0x177bac87 in /usr/bin/clickhouse
28. Poco::ThreadImpl::runnableEntry(void*) @ 0x177b86bd in /usr/bin/clickhouse
29. ? @ 0x7fcbd2203609 in ?
30. __clone @ 0x7fcbd2128133 in ?
(version 23.3.1.387 (official build))
```
**Describe the solution you'd like**
INSTR function is supported via MySQL wire protocol.
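For reference, MySQL's `INSTR` is case-insensitive for nonbinary strings, so the natural native counterpart is `positionCaseInsensitive`; registering `INSTR` as an alias of it would make the generated query above work unchanged:

```sql
-- INSTR(str, substr): 1-based position of substr in str, 0 if absent.
SELECT radio
FROM datasets.cell_towers
WHERE positionCaseInsensitive(radio, 'M') > 0;  -- what INSTR(radio, 'M') > 0 maps to
```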
CC @alexey-milovidov @mshustov
| https://github.com/ClickHouse/ClickHouse/issues/47529 | https://github.com/ClickHouse/ClickHouse/pull/47535 | e443c4e68217c8264be9e3f2b78362f5b4b5e881 | 82a6d75050513be4f677e912b84a4eada0e94f27 | "2023-03-13T12:09:23Z" | c++ | "2023-03-14T21:00:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,485 | ["src/Common/Config/ConfigProcessor.cpp", "src/Common/Config/ConfigProcessor.h"] | executable UDF config conflicts with files in `conf.d` directory | Hi there.
I have defined an external executable function in `/etc/clickhouse-server/myfunc_function.xml`, with content:
```xml
<functions>
<function>
<type>executable</type>
<name>myfunc</name>
<return_type>String</return_type>
<argument>
<type>String</type>
<name>cmd</name>
</argument>
<format>RawBLOB</format>
<command>ls /etc/clichouse-server</command>
<execute_direct>0</execute_direct>
</function>
</functions>
```
but it conflicts with my existing config `/etc/clickhouse-server/conf.d/chop-generated-macros.xml`, with content:
```xml
<yandex>
<macros>
<installation>cluster-1</installation>
<all-sharded-shard>4</all-sharded-shard>
<cluster>cluster</cluster>
<shard>2</shard>
<replica>chi-cluster-1-cluster-2-0</replica>
</macros>
</yandex>
```
Detailed traceback:
```
2023.03.12 01:57:37.568201 [ 59 ] {} <Error> ExternalUserDefinedExecutableFunctionsLoader: Failed to load config file '/etc/clickhouse-server/myfunc_function.xml': Poco::Exception. Code: 1000, e.code() = 0, Exception: Failed to merge config with '/etc/clickhouse-server/conf.d/chop-generated-macros.xml': Exception: Root element doesn't have the corresponding root element as the config file. It must be <functions>, Stack trace (when copying this message, always include the lines below):
0. DB::ConfigProcessor::processConfig(bool*, zkutil::ZooKeeperNodeCache*, std::__1::shared_ptr<Poco::Event> const&) @ 0x16160355 in /usr/bin/clickhouse
1. DB::ConfigProcessor::loadConfig(bool) @ 0x16160634 in /usr/bin/clickhouse
2. DB::ExternalLoaderXMLConfigRepository::load(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x14a14e84 in /usr/bin/clickhouse
3. DB::ExternalLoader::LoadablesConfigReader::readFileInfo(DB::ExternalLoader::LoadablesConfigReader::FileInfo&, DB::IExternalLoaderConfigRepository&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0x14a0826f in /usr/bin/clickhouse
4. DB::ExternalLoader::LoadablesConfigReader::readRepositories(std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) @ 0x14a036d9 in /usr/bin/clickhouse
5. DB::ExternalLoader::LoadablesConfigReader::read() @ 0x149f5bc5 in /usr/bin/clickhouse
6. DB::ExternalLoader::PeriodicUpdater::doPeriodicUpdates() @ 0x14a0f2be in /usr/bin/clickhouse
7. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::ExternalLoader::PeriodicUpdater::*)(), DB::ExternalLoader::PeriodicUpdater*>(void (DB::ExternalLoader::PeriodicUpdater::*&&)(), DB::ExternalLoader::PeriodicUpdater*&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x14a11027 in /usr/bin/clickhouse
8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa49fb48 in /usr/bin/clickhouse
9. ? @ 0xa4a2d5d in /usr/bin/clickhouse
10. ? @ 0x7f9181fa5609 in ?
11. clone @ 0x7f9181eca133 in ?
(version 22.9.1.10)
```
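One workaround I considered (an untested sketch — the dedicated directory and the use of `user_defined_executable_functions_config` here are my own assumptions) is to keep UDF files in a subdirectory that has no `conf.d` of its own, so nothing gets merged into them:

```xml
<!-- hypothetical /etc/clickhouse-server/config.d/udf_path.xml -->
<clickhouse>
    <!-- load *_function.xml from a directory without a conf.d next to it -->
    <user_defined_executable_functions_config>/etc/clickhouse-server/user_defined/*_function.xml</user_defined_executable_functions_config>
</clickhouse>
```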
It seems unreasonable, so how can I avoid this conflict ? | https://github.com/ClickHouse/ClickHouse/issues/47485 | https://github.com/ClickHouse/ClickHouse/pull/52770 | 91e67c105f6a39d08cd982d252a6a3b034ad4f4a | d8a55b25c00103faf55e768e38646142c223e32f | "2023-03-11T18:08:42Z" | c++ | "2023-07-30T04:21:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,458 | ["src/Formats/ReadSchemaUtils.cpp"] | Allow empty column names in CSVWithNames/TSVWithNames | Sofia Air Quality Dataset
Download: https://www.kaggle.com/datasets/hmavrodiev/sofia-air-quality-dataset
License: https://opendatacommons.org/licenses/odbl/1-0/
`tar xvf sofia-air-quality-dataset.tar.zst`
CSV files have the following header:
```
,sensor_id,location,lat,lon,timestamp,pressure,temperature,humidity
```
And while DESCRIBE works:
```
milovidov-desktop :) DESC file('*.csv', CSVWithNames)
DESCRIBE TABLE file('*.csv', CSVWithNames)
Query id: c8a4145f-d42d-46db-8612-535b50f1bb5d
ββnameβββββββββ¬βtypeβββββββββββββββββββββ¬βdefault_typeββ¬βdefault_expressionββ¬βcommentββ¬βcodec_expressionββ¬βttl_expressionββ
β β Nullable(Int64) β β β β β β
β sensor_id β Nullable(Int64) β β β β β β
β location β Nullable(Int64) β β β β β β
β lat β Nullable(Float64) β β β β β β
β lon β Nullable(Float64) β β β β β β
β timestamp β Nullable(DateTime64(9)) β β β β β β
β pressure β Nullable(Float64) β β β β β β
β temperature β Nullable(Float64) β β β β β β
β humidity β Nullable(Float64) β β β β β β
βββββββββββββββ΄ββββββββββββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββ΄βββββββββββββββββββ΄βββββββββββββββββ
```
Parsing them yields an error:
```
milovidov-desktop :) SELECT * FROM file('*.csv', CSVWithNames)
SELECT *
FROM file('*.csv', CSVWithNames)
Query id: d9b19aa6-a2c1-4e87-b759-1a6f7cb184a5
0 rows in set. Elapsed: 0.173 sec.
Received exception:
Code: 42. DB::Exception: Expected not empty name: While processing ``: While processing SELECT ``, sensor_id, location, lat, lon, timestamp, pressure, temperature, humidity FROM file('*.csv', 'CSVWithNames'). (NUMBER_OF_ARGUMENTS_DOESNT_MATCH)
```
And it does not recognize the format as CSVWithNames by default.
### Proposed solution
Option 1: Use `c1` or the first unused `cN` as the column name instead of empty.
Option 2: Ignore empty columns from the file.
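Until one of these is implemented, a possible workaround (a sketch — the first column is named `rowid` arbitrarily, and the `input_format_csv_skip_first_lines` setting should be checked against your server version) is to skip the header line and supply the structure explicitly:

```sql
SELECT count()
FROM file('*.csv', CSV,
    'rowid Nullable(Int64), sensor_id Nullable(Int64), location Nullable(Int64),
     lat Nullable(Float64), lon Nullable(Float64), timestamp Nullable(DateTime64(9)),
     pressure Nullable(Float64), temperature Nullable(Float64), humidity Nullable(Float64)')
SETTINGS input_format_csv_skip_first_lines = 1;
```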
| https://github.com/ClickHouse/ClickHouse/issues/47458 | https://github.com/ClickHouse/ClickHouse/pull/47496 | 6d45d0c37404f4d3a7bd03c92e6a2f04adef7beb | 3460667cac732d3ab67a57b06a6d4cdbe81ffe76 | "2023-03-11T02:36:28Z" | c++ | "2023-05-31T11:24:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,435 | ["src/Functions/URL/decodeURLComponent.cpp", "tests/queries/0_stateless/02677_decode_url_component.reference", "tests/queries/0_stateless/02677_decode_url_component.sql"] | encodeURLComponent produces incorrect results for Cyrillic characters | **How to reproduce**
```SQL
SELECT
encodeURLComponent('ΠΊΠ»ΠΈΠΊΡ
Π°ΡΡ') AS encoded,
decodeURLComponent(encoded) = 'ΠΊΠ»ΠΈΠΊΡ
Π°ΡΡ' AS expected_EQ
```
**Actual result**
```
ββencodedββββββββββββββββββββββββββββββββββββββ¬βexpected_EQββ
β %a0%A%a0%B%a0%8%a0%A%a1%g5%a0%0%a1%g3%a1%g1 β 0 β
βββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββ
```
**Expected result**
```
ββencodedβββββββββββββββββββββββββββββββββββββββββββ¬βexpected_EQββ
β %D0%BA%D0%BB%D0%B8%D0%BA%D1%85%D0%B0%D1%83%D1%81 β 1 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββ
```
**Additional context**
```
SELECT version()
ββversion()ββ
β 23.2.3.17 β
βββββββββββββ
``` | https://github.com/ClickHouse/ClickHouse/issues/47435 | https://github.com/ClickHouse/ClickHouse/pull/47457 | 8e4112b1cea6c2ce13de644a9b8f967cfe5afbaf | 9de300d8f7e4b3d0532f3e4a587cef27c899cc8c | "2023-03-10T15:52:56Z" | c++ | "2023-03-11T12:40:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,420 | ["docs/en/operations/settings/merge-tree-settings.md", "src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp", "src/Storages/MergeTree/MergeTreeSettings.cpp", "src/Storages/MergeTree/MergeTreeSettings.h", "tests/queries/0_stateless/01419_merge_tree_settings_sanity_check.sql"] | Setup to implement "archive" merges that merges high level parts that are old | The current merge algorithm follows a tree approach. It works well, but this means that some data will not be merged for a very long period of time.
To force merging of old data, there is a setting, `min_age_to_force_merge_seconds`, that can be applied per table. On paper this setting works perfectly, but it is also a time bomb. As I understand it, "force" means the merge is queued immediately as soon as the condition fires. There is no way to limit the concurrency of forced merges, so any spike of ingestion can plant a bomb that goes off months later, when "archive" merges will suddenly starve normal merges, and there is no way to smooth that out.
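For reference, the setting discussed above is applied per table like this (the 30-day value is an arbitrary example):

```sql
-- eagerly merge parts once they are older than ~30 days
ALTER TABLE t MODIFY SETTING min_age_to_force_merge_seconds = 2592000;
```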
Does this use case make sense? Or is it just that the mergeSelecting settings are not tuned well enough for the data? I think something like a `max_number_of_merges_force_in_pool` setting would solve this.
| https://github.com/ClickHouse/ClickHouse/issues/47420 | https://github.com/ClickHouse/ClickHouse/pull/53405 | 75d32bfe771db20a2698d6a6e5d9484e3cf747f5 | cbbb81f5ccf94e7549fd5608f9c157adc63a8dbd | "2023-03-10T08:55:54Z" | c++ | "2023-08-24T17:49:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,415 | ["docs/en/engines/table-engines/mergetree-family/mergetree.md", "src/Disks/StoragePolicy.cpp", "src/Disks/VolumeJBOD.cpp", "src/Disks/VolumeJBOD.h", "tests/integration/test_jbod_load_balancing/configs/config.d/storage_configuration.xml", "tests/integration/test_jbod_load_balancing/test.py"] | What's the time of disk switching to another in least_used disk policy | Hi:
We configured 2 disks in one volume with `load_balancing` set to `least_used`. Our database has only MergeTree engine tables, and we observed that the disk switch happens only when one disk has about 50 GB more remaining space than the other. But looking at the ClickHouse source code, when reserving space for a new part it compares the unreserved space of the two disks and chooses the disk accordingly, so I think the switch shouldn't be this slow. What is the exact condition, or moment, at which writes switch from one disk to the other?
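For anyone checking the same thing, the per-disk numbers that `least_used` compares can be inspected directly (a sketch; exact column availability may vary by server version):

```sql
-- unreserved_space = free space minus space already reserved for in-flight parts
SELECT name, path, free_space, unreserved_space
FROM system.disks;
```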
The following graph shows the free space of the two disks ClickHouse is writing to:
<img width="1117" alt="image" src="https://user-images.githubusercontent.com/46508720/224207370-0d545cd6-37f8-4b6a-8fc8-5cd72fcc3183.png">
| https://github.com/ClickHouse/ClickHouse/issues/47415 | https://github.com/ClickHouse/ClickHouse/pull/56030 | 3631e476eb3cdc438739d1bfb8cd351268391534 | 58824fb02b174c8ccfa5a14c7db238f43e228686 | "2023-03-10T02:50:19Z" | c++ | "2023-10-30T08:04:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,393 | ["src/Storages/MergeTree/MutateTask.cpp", "tests/queries/0_stateless/02346_inverted_index_mutation.reference", "tests/queries/0_stateless/02346_inverted_index_mutation.sql"] | inverted index bug on segment id | **Describe what's wrong**
As in title.
**Does it reproduce on recent release?**
Yes.
**How to reproduce**
```sql
set allow_experimental_inverted_index=1;
CREATE TABLE t
(
`timestamp` UInt64,
    `s` String,
    `flag` UInt8 DEFAULT 0, -- added so the ALTER ... UPDATE below has a column to modify
    INDEX idx lower(s) TYPE inverted(3) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY toDate(timestamp) -- partitioning may be necessary
ORDER BY toDate(timestamp)
SETTINGS min_rows_for_wide_part = 1, min_bytes_for_wide_part = 1; -- these settings are necessary
INSERT INTO t (s) SELECT * FROM generateRandom('s String') LIMIT 100;
-- update a column (other than the indexed one) to trigger a mutation
ALTER TABLE t UPDATE flag=1 WHERE 1;
-- wait for mutation to finish on at least 1 part
SELECT sleepEachRow(1) FROM t LIMIT 3;
SELECT * FROM t WHERE lower(s) LIKE '%iamok%';
```
see [ClickFiddle](https://fiddle.clickhouse.com/8eb0e0ed-8ec2-4326-8880-3ecb0bec1747)
result:
```
[chi-datalake-ck-cluster-2-0-0] 2023.03.09 16:34:23.859618 [ 388 ] {f78d765e-4cec-4cfd-a49c-6379667f915b} <Error> executeQuery: Code: 49. DB::Exception: Invalid segment id 1. (LOGICAL_ERROR) (version 23.2.3.17 (official build)) (from 127.0.0.1:43318) (in query: SELECT * FROM t WHERE lower(s) LIKE '%iamnotok%'), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe0c67d5 in /usr/bin/clickhouse
1. ? @ 0xee7502f in /usr/bin/clickhouse
2. DB::GinIndexStoreDeserializer::readSegmentDictionary(unsigned int) @ 0x141f106a in /usr/bin/clickhouse
3. DB::GinIndexStoreFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::IDataPartStorage const>) @ 0x141f2e36 in /usr/bin/clickhouse
4. DB::MergeTreeDataSelectExecutor::filterMarksUsingIndex(std::__1::shared_ptr<DB::IMergeTreeIndex const>, std::__1::shared_ptr<DB::IMergeTreeIndexCondition>, std::__1::shared_ptr<DB::IMergeTreeDataPart const>, DB::MarkRanges const&, DB::Settings const&, DB::MergeTreeReaderSettings const&, unsigned long&, unsigned long&, DB::MarkCache*, DB::UncompressedCache*, Poco::Logger*) @ 0x14330bcc in /usr/bin/clickhouse
5. ? @ 0x1432d83d in /usr/bin/clickhouse
6. DB::MergeTreeDataSelectExecutor::filterPartsByPrimaryKeyAndSkipIndexes(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>&&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const>, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context const> const&, DB::KeyCondition const&, DB::MergeTreeReaderSettings const&, Poco::Logger*, unsigned long, std::__1::vector<DB::ReadFromMergeTree::IndexStat, std::__1::allocator<DB::ReadFromMergeTree::IndexStat>>&, bool) @ 0x1432b1bb in /usr/bin/clickhouse
7. DB::ReadFromMergeTree::selectRangesToReadImpl(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context const>, unsigned long, std::__1::shared_ptr<std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, long>>>>, DB::MergeTreeData const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, bool, Poco::Logger*) @ 0x14aea48e in /usr/bin/clickhouse
8. DB::ReadFromMergeTree::selectRangesToRead(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>, std::__1::shared_ptr<DB::PrewhereInfo> const&, DB::ActionDAGNodes const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context const>, unsigned long, std::__1::shared_ptr<std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, long>>>>, DB::MergeTreeData const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, bool, Poco::Logger*) @ 0x14ae875f in /usr/bin/clickhouse
9. DB::ReadFromMergeTree::selectRangesToRead(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>) const @ 0x14ae7366 in /usr/bin/clickhouse
10. DB::ReadFromMergeTree::getAnalysisResult() const @ 0x14aed196 in /usr/bin/clickhouse
11. DB::ReadFromMergeTree::initializePipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&) @ 0x14aef2a9 in /usr/bin/clickhouse
12. DB::ISourceStep::updatePipeline(std::__1::vector<std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>, std::__1::allocator<std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>>>, DB::BuildQueryPipelineSettings const&) @ 0x14ab4814 in /usr/bin/clickhouse
13. DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x14ace149 in /usr/bin/clickhouse
14. DB::InterpreterSelectWithUnionQuery::execute() @ 0x1365c72d in /usr/bin/clickhouse
15. ? @ 0x1397b559 in /usr/bin/clickhouse
16. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x1397862d in /usr/bin/clickhouse
17. DB::TCPHandler::runImpl() @ 0x146ff144 in /usr/bin/clickhouse
18. DB::TCPHandler::run() @ 0x14713979 in /usr/bin/clickhouse
19. Poco::Net::TCPServerConnection::start() @ 0x17614454 in /usr/bin/clickhouse
20. Poco::Net::TCPServerDispatcher::run() @ 0x1761567b in /usr/bin/clickhouse
21. Poco::PooledThread::run() @ 0x1779ca07 in /usr/bin/clickhouse
22. Poco::ThreadImpl::runnableEntry(void*) @ 0x1779a43d in /usr/bin/clickhouse
23. ? @ 0x7f8809b7d609 in ?
24. clone @ 0x7f8809aa2133 in ?
0 rows in set. Elapsed: 0.004 sec.
Received exception from server (version 23.2.3):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid segment id 1. (LOGICAL_ERROR)
```
| https://github.com/ClickHouse/ClickHouse/issues/47393 | https://github.com/ClickHouse/ClickHouse/pull/47663 | a02dedbb6a139de25ff1168a2c41d59b77b96032 | e633f36ce91a2ae335010548638aac79bca3aef9 | "2023-03-09T08:44:22Z" | c++ | "2023-10-09T11:31:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,366 | ["tests/queries/0_stateless/02923_join_use_nulls_modulo.reference", "tests/queries/0_stateless/02923_join_use_nulls_modulo.sql"] | Logical error: 'Arguments of 'modulo' have incorrect data types (with `join_use_nulls`) | ERROR: type should be string, got "https://s3.amazonaws.com/clickhouse-test-reports/47310/86410a7a3f8a9241413f8a36d3bda04e003e17da/fuzzer_astfuzzerasan/report.html\r\n\r\n```\r\ndell9510 :) set join_use_nulls=1\r\n\r\nSET join_use_nulls = 1\r\n\r\nQuery id: 32112886-0261-4ec1-a16f-5445e1df1080\r\n\r\nOk.\r\n\r\n0 rows in set. Elapsed: 0.003 sec. \r\n\r\ndell9510 :) SELECT id % 255, toTypeName(d.id) FROM (SELECT toLowCardinality(1048577) AS id, toLowCardinality(9223372036854775807) AS value GROUP BY GROUPING SETS ((toLowCardinality(1024)), (id % 10.0001), ((id % 2147483646) != -9223372036854775807), ((id % -1) != 255))) AS a SEMI LEFT JOIN (SELECT toLowCardinality(9223372036854775807) AS id WHERE (id % 2147483646) != NULL) AS d USING (id)\r\n\r\nSELECT\r\n id % 255,\r\n toTypeName(d.id)\r\nFROM\r\n(\r\n SELECT\r\n toLowCardinality(1048577) AS id,\r\n toLowCardinality(9223372036854775807) AS value\r\n GROUP BY\r\n GROUPING SETS (\r\n (toLowCardinality(1024)),\r\n (id % 10.0001),\r\n ((id % 2147483646) != -9223372036854775807),\r\n ((id % -1) != 255))\r\n) AS a\r\nSEMI LEFT JOIN\r\n(\r\n SELECT toLowCardinality(9223372036854775807) AS id\r\n WHERE (id % 2147483646) != NULL\r\n) AS d USING (id)\r\n\r\nQuery id: ffb50b8f-4fb0-40bb-ac55-df2dc3e2b16d\r\n\r\n[dell9510] 2023.03.08 15:38:24.265166 [ 687205 ] {ffb50b8f-4fb0-40bb-ac55-df2dc3e2b16d} <Fatal> : Logical error: 'Arguments of 'modulo' have incorrect data types: 'id' of type 'UInt64', '255' of type 'UInt8''.\r\n[dell9510] 2023.03.08 15:38:24.266787 [ 687522 ] <Fatal> BaseDaemon: ########################################\r\n[dell9510] 2023.03.08 15:38:24.267509 [ 687522 ] <Fatal> BaseDaemon: (version 23.3.1.2537, build 
id: 70B15AC8648C7B6B98FA15EFE80C934F361370D2) (from thread 687205) (query_id: ffb50b8f-4fb0-40bb-ac55-df2dc3e2b16d) (query: SELECT id % 255, toTypeName(d.id) FROM (SELECT toLowCardinality(1048577) AS id, toLowCardinality(9223372036854775807) AS value GROUP BY GROUPING SETS ((toLowCardinality(1024)), (id % 10.0001), ((id % 2147483646) != -9223372036854775807), ((id % -1) != 255))) AS a SEMI LEFT JOIN (SELECT toLowCardinality(9223372036854775807) AS id WHERE (id % 2147483646) != NULL) AS d USING (id)) Received signal Aborted (6)\r\n[dell9510] 2023.03.08 15:38:24.267709 [ 687522 ] <Fatal> BaseDaemon: \r\n[dell9510] 2023.03.08 15:38:24.267898 [ 687522 ] <Fatal> BaseDaemon: Stack trace: 0x7f772f7b08ec 0x7f772f761ea8 0x7f772f74b53d 0x2303484c 0x230348b5 0x23034cec 0x19a05dd7 0x1b11e709 0x1fd4c5fc 0x1fd4a783 0x1fd490fd 0x19a08819 0x19a0680d 0x2a39ae0f 0x2a39ba01 0x2a39d049 0x2b0f2c51 0x2b0f2608 0x2e36f73a 0x2e5e87a8 0x2c2806fe 0x2c2737cc 0x2c26a96c 0x2c267876 0x2c35ac58 0x2c35b7fc 0x2c88d0f7 0x2c888a9b 0x2de4b24d 0x2de5dbd2 0x332c5e99 0x332c66e8 0x33546ea1 0x335437ba 0x33542355 0x7f772f7aebb5 0x7f772f830d90\r\n[dell9510] 2023.03.08 15:38:24.268093 [ 687522 ] <Fatal> BaseDaemon: 4. ? @ 0x7f772f7b08ec in ?\r\n[dell9510] 2023.03.08 15:38:24.268234 [ 687522 ] <Fatal> BaseDaemon: 5. raise @ 0x7f772f761ea8 in ?\r\n[dell9510] 2023.03.08 15:38:24.268412 [ 687522 ] <Fatal> BaseDaemon: 6. abort @ 0x7f772f74b53d in ?\r\n[dell9510] 2023.03.08 15:38:24.341671 [ 687522 ] <Fatal> BaseDaemon: 7. /home/tavplubix/ch/ClickHouse/src/Common/Exception.cpp:41: DB::abortOnFailedAssertion(String const&) @ 0x2303484c in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:24.413843 [ 687522 ] <Fatal> BaseDaemon: 8. 
/home/tavplubix/ch/ClickHouse/src/Common/Exception.cpp:64: DB::handle_error_code(String const&, int, bool, std::vector<void*, std::allocator<void*>> const&) @ 0x230348b5 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:24.476191 [ 687522 ] <Fatal> BaseDaemon: 9. /home/tavplubix/ch/ClickHouse/src/Common/Exception.cpp:92: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x23034cec in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:24.548421 [ 687522 ] <Fatal> BaseDaemon: 10. /home/tavplubix/ch/ClickHouse/src/Common/Exception.h:55: DB::Exception::Exception(String&&, int, bool) @ 0x19a05dd7 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:24.926032 [ 687522 ] <Fatal> BaseDaemon: 11. /home/tavplubix/ch/ClickHouse/src/Common/Exception.h:82: DB::Exception::Exception<String, String const&, String, String const&, String>(int, FormatStringHelperImpl<std::type_identity<String>::type, std::type_identity<String const&>::type, std::type_identity<String>::type, std::type_identity<String const&>::type, std::type_identity<String>::type>, String&&, String const&, String&&, String const&, String&&) @ 0x1b11e709 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:25.394235 [ 687522 ] <Fatal> BaseDaemon: 12. /home/tavplubix/ch/ClickHouse/src/Functions/FunctionBinaryArithmetic.h:1837: DB::FunctionBinaryArithmetic<DB::ModuloImpl, DB::NameModulo, false, true, false>::executeImpl2(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 63ul, 64ul> const*) const @ 0x1fd4c5fc in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:26.384509 [ 687522 ] <Fatal> BaseDaemon: 13. 
/home/tavplubix/ch/ClickHouse/src/Functions/FunctionBinaryArithmetic.h:0: DB::FunctionBinaryArithmetic<DB::ModuloImpl, DB::NameModulo, false, true, false>::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x1fd4a783 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:27.476610 [ 687522 ] <Fatal> BaseDaemon: 14. /home/tavplubix/ch/ClickHouse/src/Functions/FunctionBinaryArithmetic.h:0: DB::FunctionBinaryArithmeticWithConstants<DB::ModuloImpl, DB::NameModulo, false, true, false>::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x1fd490fd in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:27.563563 [ 687522 ] <Fatal> BaseDaemon: 15. /home/tavplubix/ch/ClickHouse/src/Functions/IFunction.h:414: DB::IFunction::executeImplDryRun(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x19a08819 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:27.651453 [ 687522 ] <Fatal> BaseDaemon: 16. /home/tavplubix/ch/ClickHouse/src/Functions/IFunctionAdaptors.h:26: DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x19a0680d in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:27.724521 [ 687522 ] <Fatal> BaseDaemon: 17. 
/home/tavplubix/ch/ClickHouse/src/Functions/IFunction.cpp:247: DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x2a39ae0f in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:27.794065 [ 687522 ] <Fatal> BaseDaemon: 18. /home/tavplubix/ch/ClickHouse/src/Functions/IFunction.cpp:282: DB::IExecutableFunction::executeWithoutSparseColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x2a39ba01 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:27.861171 [ 687522 ] <Fatal> BaseDaemon: 19. /home/tavplubix/ch/ClickHouse/src/Functions/IFunction.cpp:376: DB::IExecutableFunction::execute(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x2a39d049 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:29.030095 [ 687522 ] <Fatal> BaseDaemon: 20. /home/tavplubix/ch/ClickHouse/src/Interpreters/ActionsDAG.cpp:516: DB::executeActionForHeader(DB::ActionsDAG::Node const*, std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>>) @ 0x2b0f2c51 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:29.708069 [ 687522 ] <Fatal> BaseDaemon: 21. /home/tavplubix/ch/ClickHouse/src/Interpreters/ActionsDAG.cpp:633: DB::ActionsDAG::updateHeader(DB::Block) const @ 0x2b0f2608 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:29.751933 [ 687522 ] <Fatal> BaseDaemon: 22. 
/home/tavplubix/ch/ClickHouse/src/Processors/Transforms/ExpressionTransform.cpp:8: DB::ExpressionTransform::transformHeader(DB::Block, DB::ActionsDAG const&) @ 0x2e36f73a in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:29.841851 [ 687522 ] <Fatal> BaseDaemon: 23. /home/tavplubix/ch/ClickHouse/src/Processors/QueryPlan/ExpressionStep.cpp:32: DB::ExpressionStep::ExpressionStep(DB::DataStream const&, std::shared_ptr<DB::ActionsDAG> const&) @ 0x2e5e87a8 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:30.343756 [ 687522 ] <Fatal> BaseDaemon: 24. /home/tavplubix/ch/ClickHouse/contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:714: std::__unique_if<DB::ExpressionStep>::__unique_single std::make_unique[abi:v15000]<DB::ExpressionStep, DB::DataStream const&, std::shared_ptr<DB::ActionsDAG> const&>(DB::DataStream const&, std::shared_ptr<DB::ActionsDAG> const&) @ 0x2c2806fe in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:30.724641 [ 687522 ] <Fatal> BaseDaemon: 25. /home/tavplubix/ch/ClickHouse/src/Interpreters/InterpreterSelectQuery.cpp:2671: DB::InterpreterSelectQuery::executeExpression(DB::QueryPlan&, std::shared_ptr<DB::ActionsDAG> const&, String const&) @ 0x2c2737cc in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:31.094562 [ 687522 ] <Fatal> BaseDaemon: 26. /home/tavplubix/ch/ClickHouse/src/Interpreters/InterpreterSelectQuery.cpp:1632: DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::optional<DB::Pipe>) @ 0x2c26a96c in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:31.458915 [ 687522 ] <Fatal> BaseDaemon: 27. 
/home/tavplubix/ch/ClickHouse/src/Interpreters/InterpreterSelectQuery.cpp:786: DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x2c267876 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:31.611916 [ 687522 ] <Fatal> BaseDaemon: 28. /home/tavplubix/ch/ClickHouse/src/Interpreters/InterpreterSelectWithUnionQuery.cpp:308: DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x2c35ac58 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:31.769888 [ 687522 ] <Fatal> BaseDaemon: 29. /home/tavplubix/ch/ClickHouse/src/Interpreters/InterpreterSelectWithUnionQuery.cpp:382: DB::InterpreterSelectWithUnionQuery::execute() @ 0x2c35b7fc in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:31.964792 [ 687522 ] <Fatal> BaseDaemon: 30. /home/tavplubix/ch/ClickHouse/src/Interpreters/executeQuery.cpp:750: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x2c88d0f7 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:32.171746 [ 687522 ] <Fatal> BaseDaemon: 31. /home/tavplubix/ch/ClickHouse/src/Interpreters/executeQuery.cpp:1198: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x2c888a9b in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:32.379520 [ 687522 ] <Fatal> BaseDaemon: 32. /home/tavplubix/ch/ClickHouse/src/Server/TCPHandler.cpp:413: DB::TCPHandler::runImpl() @ 0x2de4b24d in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:32.612198 [ 687522 ] <Fatal> BaseDaemon: 33. 
/home/tavplubix/ch/ClickHouse/src/Server/TCPHandler.cpp:1997: DB::TCPHandler::run() @ 0x2de5dbd2 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:32.643756 [ 687522 ] <Fatal> BaseDaemon: 34. /home/tavplubix/ch/ClickHouse/base/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x332c5e99 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:32.669496 [ 687522 ] <Fatal> BaseDaemon: 35. /home/tavplubix/ch/ClickHouse/base/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x332c66e8 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:32.701959 [ 687522 ] <Fatal> BaseDaemon: 36. /home/tavplubix/ch/ClickHouse/base/poco/Foundation/src/ThreadPool.cpp:188: Poco::PooledThread::run() @ 0x33546ea1 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:32.725986 [ 687522 ] <Fatal> BaseDaemon: 37. /home/tavplubix/ch/ClickHouse/base/poco/Foundation/src/Thread.cpp:46: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x335437ba in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:32.747434 [ 687522 ] <Fatal> BaseDaemon: 38. /home/tavplubix/ch/ClickHouse/base/poco/Foundation/src/Thread_POSIX.cpp:335: Poco::ThreadImpl::runnableEntry(void*) @ 0x33542355 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse\r\n[dell9510] 2023.03.08 15:38:32.747608 [ 687522 ] <Fatal> BaseDaemon: 39. ? @ 0x7f772f7aebb5 in ?\r\n[dell9510] 2023.03.08 15:38:32.747743 [ 687522 ] <Fatal> BaseDaemon: 40. ? @ 0x7f772f830d90 in ?\r\n[dell9510] 2023.03.08 15:38:32.747912 [ 687522 ] <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.\r\nException on client:\r\nCode: 32. 
DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF)\r\n\r\n```\r\n\r\ncc: @vdimir " | https://github.com/ClickHouse/ClickHouse/issues/47366 | https://github.com/ClickHouse/ClickHouse/pull/57200 | 8325d04313955510dc1267f6c9d0a64263e2e7c3 | ef9670b5deaa6913148742db50a7ed0db5adf1d1 | "2023-03-08T14:40:09Z" | c++ | "2023-11-25T02:00:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,332 | ["src/Storages/AlterCommands.cpp", "src/Storages/ColumnsDescription.cpp", "src/Storages/ColumnsDescription.h", "tests/queries/0_stateless/02725_alias_columns_should_not_allow_compression_codec.reference", "tests/queries/0_stateless/02725_alias_columns_should_not_allow_compression_codec.sql"] | Alias columns should not allow codec | **Column ALIAS should not allow codec**
An ALIAS column (being non-materialized) should probably not allow adding a codec.
**How to reproduce**
* server version: ClickHouse server version 22.12.3 revision 54461.
* client: ClickHouse client version 22.12.3.5 (official build).
* Non-default settings: seems none relevant
* Sample data - unnecessary
* Queries to run:
```
create database if not exists tmp;
create table if not exists tmp.alias_column_should_not_allow_compression ( user_id UUID, user_id_hashed ALIAS (cityHash64(user_id)))
engine MergeTree
partition by tuple()
order by tuple()
;
/* -- generate alters
select --name, type,
'alter table '|| database || '.' || table || ' modify column `' || name || '` ' || type || ' codec(LZ4HC(1));'
from system.columns where database = 'tmp' and table = 'alias_column_should_not_allow_compression' and compression_codec = ''
;
*/
alter table tmp.alias_column_should_not_allow_compression modify column `user_id_hashed` UInt64 codec(LZ4HC(1));
-- performs OK, though I understand that it makes no sense for an ALIAS column - such columns are missed in bulk operations, and there is no way to override the default at the table's MergeTree settings level
detach table tmp.alias_column_should_not_allow_compression;
attach table tmp.alias_column_should_not_allow_compression;
-- throws an error (same on server restart - the daemon loops on startup)
-- Received exception from server (version 22.12.3):
-- Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Cannot specify codec for column type ALIAS. (BAD_ARGUMENTS)
```
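A hedged reading of the reproduction above: the codec constraint for ALIAS columns is enforced when metadata is loaded (attach) but not when it is mutated (alter), so invalid metadata can be persisted. A toy Python sketch of that asymmetry (an illustration only, not ClickHouse code; all names here are hypothetical):

```python
# Toy model of the validation asymmetry: a constraint checked only at
# load (attach) time but not at mutation (alter) time lets you persist
# metadata that can no longer be loaded.
class Column:
    def __init__(self, name, kind, codec=None):
        self.name, self.kind, self.codec = name, kind, codec

def validate(col):
    # attach-time check, mirroring the BAD_ARGUMENTS error above
    if col.kind == "ALIAS" and col.codec is not None:
        raise ValueError(f"Cannot specify codec for column type ALIAS: {col.name}")

def alter_modify_codec(col, codec, validate_on_alter):
    col.codec = codec
    if validate_on_alter:  # the proposed fix: throw on the ALTER attempt
        validate(col)

col = Column("user_id_hashed", "ALIAS")
alter_modify_codec(col, "LZ4HC(1)", validate_on_alter=False)  # succeeds (the bug)
try:
    validate(col)  # detach/attach: the metadata now fails to load
except ValueError as err:
    print(err)
```

With `validate_on_alter=True` the same call raises immediately, matching the expected behavior suggested below.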
**Expected behavior**
I suppose the best option here is to throw on the ALTER attempt (though disregarding the codec on attach/operation of an ALIAS column is an option as well)
---
**Additional context**
```-- also related minor issues:
-- 1 - how to drop table in such situation? or remove replica?
drop table tmp.alias_column_should_not_allow_compression;
-- returns
-- Received exception from server (version 22.12.3):
-- Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table tmp.alias_column_should_not_allow_compression doesn't exist. (UNKNOWN_TABLE)
create or replace table tmp.alias_column_should_not_allow_compression ( user_id UUID, user_id_hashed ALIAS (cityHash64(user_id)))
engine MergeTree
partition by tuple()
order by tuple()
;
-- returns:
-- Received exception from server (version 22.12.3):
-- Code: 57. DB::Exception: Received from localhost:9000. DB::Exception: Table `tmp`.`alias_column_should_not_allow_compression` already exists (detached). (TABLE_ALREADY_EXISTS)
replace table tmp.alias_column_should_not_allow_compression ( user_id UUID, user_id_hashed ALIAS (cityHash64(user_id)))
engine MergeTree
partition by tuple()
order by tuple()
;
-- returns:
-- Received exception from server (version 22.12.3):
-- Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table `tmp`.`alias_column_should_not_allow_compression` doesn't exist. (UNKNOWN_TABLE)
-- *if there is a way to do it via SQL - please let me know (we resolved it only via manipulations with ZooKeeper and afterwards restoring the replica)
-- 2 - exception readability
create table if not exists tmp.alias_column_should_not_allow_compression_2 ( user_id UUID, user_id_hashed ALIAS (cityHash64(user_id)))
engine MergeTree
partition by tuple()
order by tuple()
;
alter table tmp.alias_column_should_not_allow_compression_2 modify column if exists `user_id_hashed` remove CODEC;
-- if the column has no codec it throws: ...doesn't have TTL... - it might be more readable if parametrized like ...doesn't have {item_property_to_remove}...
-- Received exception from server (version 22.12.3):
-- Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Column `user_id_hashed` doesn't have TTL, cannot remove it. (BAD_ARGUMENTS)
```
| https://github.com/ClickHouse/ClickHouse/issues/47332 | https://github.com/ClickHouse/ClickHouse/pull/49363 | 03aa4f7f8a1d1120e48f9a53a7027236e7058a40 | ee9fae6aa219d257e59821a7c0603bb6bc1ce89b | "2023-03-08T10:52:51Z" | c++ | "2023-05-03T00:37:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,317 | ["src/Interpreters/ActionsDAG.cpp", "tests/queries/0_stateless/01655_plan_optimizations.reference", "tests/queries/0_stateless/01655_plan_optimizations.sh", "tests/queries/0_stateless/02568_and_consistency.reference", "tests/queries/0_stateless/02568_and_consistency.sql"] | HAVING returning no result | ```
CREATE TABLE t1 (c0 Int32, PRIMARY KEY (c0)) ENGINE=MergeTree;
INSERT INTO t1 VALUES (1554690688);
```
```
SELECT MIN(t1.c0)
FROM t1
GROUP BY
(-sign(cos(t1.c0))) * (-max2(t1.c0, t1.c0 / t1.c0)),
t1.c0 * t1.c0,
sign(-exp(-t1.c0))
SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0
Query id: 53d13f67-8069-48ce-b614-6c04da2dba07
βββββMIN(c0)ββ
β 1554690688 β
ββββββββββββββ
1 row in set. Elapsed: 0.035 sec.
```
Original report:
```
SELECT MIN(t1.c0)
FROM t1
GROUP BY
(-sign(cos(t1.c0))) * (-max2(t1.c0, t1.c0 / t1.c0)),
t1.c0 * t1.c0,
sign(-exp(-t1.c0))
HAVING -(-(MIN(t1.c0) + MIN(t1.c0))) AND (pow('{b' > '-657301241', log(-1004522121)) IS NOT NULL)
UNION ALL
SELECT MIN(t1.c0)
FROM t1
GROUP BY
(-sign(cos(t1.c0))) * (-max2(t1.c0, t1.c0 / t1.c0)),
t1.c0 * t1.c0,
sign(-exp(-t1.c0))
HAVING NOT (-(-(MIN(t1.c0) + MIN(t1.c0))) AND (pow('{b' > '-657301241', log(-1004522121)) IS NOT NULL))
UNION ALL
SELECT MIN(t1.c0)
FROM t1
GROUP BY
(-sign(cos(t1.c0))) * (-max2(t1.c0, t1.c0 / t1.c0)),
t1.c0 * t1.c0,
sign(-exp(-t1.c0))
HAVING (-(-(MIN(t1.c0) + MIN(t1.c0))) AND (pow('{b' > '-657301241', log(-1004522121)) IS NOT NULL)) IS NULL
SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0
Query id: b229fc66-42a6-4240-8374-48a3b5c5d8b5
Ok.
0 rows in set. Elapsed: 0.143 sec.
```
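The original report above is a completeness check: under SQL three-valued logic, any predicate value falls into exactly one of the branches `p`, `NOT p`, `p IS NULL`, so the UNION ALL of the three filtered queries must return every group exactly once, and an empty result for a non-empty input is impossible. A small Python sketch of that invariant (SQL TRUE/FALSE/NULL modeled as True/False/None):

```python
# Model SQL three-valued predicate outcomes as True / False / None (NULL).
# A HAVING clause keeps a group only when the predicate is TRUE, so:
def branches(p):
    return [
        p is True,   # HAVING p
        p is False,  # HAVING NOT p   (NOT NULL is NULL, hence filtered out)
        p is None,   # HAVING p IS NULL
    ]

# For every possible predicate value, exactly one branch keeps the group,
# so the three-way UNION ALL must reproduce the input groups exactly.
for p in (True, False, None):
    assert sum(branches(p)) == 1
```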
Simplified bug:
`-(-(MIN(t1.c0) + MIN(t1.c0))) AND (pow('{b' > '-657301241', log(-1004522121)) IS NOT NULL)` evaluates to 1 when computed outside HAVING, but when we put it in HAVING the query returns nothing:
```
SELECT
MIN(t1.c0),
-(-(MIN(t1.c0) + MIN(t1.c0))) AND (pow('{b' > '-657301241', log(-1004522121)) IS NOT NULL) AS f
FROM t1
GROUP BY c0
HAVING -(-(MIN(t1.c0) + MIN(t1.c0))) AND (pow('{b' > '-657301241', log(-1004522121)) IS NOT NULL)
0 rows in set. Elapsed: 0.040 sec.
```
If I just add `true AND` in HAVING, it is fixed:
```
SELECT
MIN(t1.c0),
-(-(MIN(t1.c0) + MIN(t1.c0))) AND (pow('{b' > '-657301241', log(-1004522121)) IS NOT NULL) AS f
FROM t1
GROUP BY c0
HAVING true AND -(-(MIN(t1.c0) + MIN(t1.c0))) AND (pow('{b' > '-657301241', log(-1004522121)) IS NOT NULL)
βββββMIN(c0)ββ¬βfββ
β 1554690688 β 1 β
ββββββββββββββ΄ββββ
1 row in set. Elapsed: 0.043 sec.
```
| https://github.com/ClickHouse/ClickHouse/issues/47317 | https://github.com/ClickHouse/ClickHouse/pull/47584 | e4500a7f2ab82d4bab2c3f9d931c202e590d6907 | 9bf6175919406ca9cfec4a4ec4c3f6ad5f9d83c4 | "2023-03-07T20:46:56Z" | c++ | "2023-05-11T12:03:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,311 | ["src/Interpreters/ThreadStatusExt.cpp", "src/Processors/Transforms/buildPushingToViewsChain.cpp", "tests/queries/0_stateless/02572_query_views_log_background_thread.reference", "tests/queries/0_stateless/02572_query_views_log_background_thread.sql"] | `query_views_log` does not get populated when using a Materialized View attached to a Kafka table | **Describe the unexpected behaviour**
Using a simple pipeline:
| Kafka engine table | ---> | Materialized View | ---> | Destination table |
- data is flowing correctly but `system.query_views_log` is not updated accordingly.
- `query_views_log` is correctly configured and working with other MVs.
Versions tested 22.8 and 23.2
**How to reproduce**
```sql
SELECT
name,
value
FROM system.settings
WHERE name = 'log_queries'
ββnameβββββββββ¬βvalueββ
β log_queries β 1 β
βββββββββββββββ΄ββββββββ
CREATE TABLE IF NOT EXISTS default.test_to
(
`key` String,
`value` UInt8
)
ENGINE = MergeTree
ORDER BY key
CREATE TABLE IF NOT EXISTS default.test_kafka
(
`key` String,
`value` UInt8
)
ENGINE = Kafka
SETTINGS
kafka_broker_list = 'localhost:9092',
kafka_topic_list = 'test1',
kafka_group_name = 'group1',
kafka_format = 'JSONEachRow',
kafka_num_consumers = 1,
kafka_thread_per_consumer = 0
CREATE MATERIALIZED VIEW IF NOT EXISTS default.test_mv TO test_to
AS
SELECT *
FROM default.test_kafka SETTINGS log_query_views = 1;
INSERT INTO test_kafka FORMAT JSONEachRow {"key": "key1", "value": 1};
SELECT * FROM system.query_views_log
0 rows in set. Elapsed: 0.002 sec.
```
**Expected behavior**
There should be an entry in `query_views_log`; it seems that when a materialized view is attached to a Kafka engine table, `query_views_log` does not get populated.
| https://github.com/ClickHouse/ClickHouse/issues/47311 | https://github.com/ClickHouse/ClickHouse/pull/46668 | 715ef4da6e0c9bc4f8c8e36c61d92080725789ce | 0503ed6fb826c566ce1f505bb2629905ad2e18ce | "2023-03-07T16:25:16Z" | c++ | "2023-04-07T12:47:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,287 | ["src/Interpreters/InterpreterSelectQuery.cpp", "src/Storages/SelectQueryInfo.h", "src/Storages/StorageView.cpp", "src/Storages/StorageView.h", "tests/queries/0_stateless/02428_parameterized_view.reference", "tests/queries/0_stateless/02428_parameterized_view.sh"] | Parameterized View incorrect substitution when parameter name is substring of column name | When the name of the parameter (in this case `i`) is present (substring) in the column being filtered related to the create view, there is an incorrect parsing of the string substitution when executing the select statement
For example ( Fiddle -> https://fiddle.clickhouse.com/8e42a8f4-d1aa-4808-a40c-38e25f837aba )
```
create table testid ( id UInt64 ) engine Memory;
insert into testid values (1),(2);
create view test_view as select * from testid where id = {i:UInt64};
select * from test_view (i=2);
```
This would result in the following error
```
Code: 10. DB::Exception: Column `i)` not found in table default.test_view (5e34b78d-0f40-45ee-8e91-a6c92530017f). (NOT_FOUND_COLUMN_IN_BLOCK) (version 23.1.3.5 (official build))
```
However, this works (https://fiddle.clickhouse.com/946a328a-8491-436e-ba8e-c2f9fe4a5ddf), where the parameter name was changed to `s`:
```
create table testid ( id UInt64 ) engine Memory;
insert into testid values (1),(2);
create view test_view as select * from testid where id = {s:UInt64};
select * from test_view (s=2);
```
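The mangled error above (``Column `i)` not found``) is consistent with the parameter name being matched without token boundaries, so `i` also hits `id` and `testid`. A minimal Python sketch of that failure mode (an illustration of substring-unsafe substitution, not ClickHouse's actual implementation):

```python
import re

query = "select * from testid where id = {i:UInt64}"

# Naive: replace every occurrence of the bare parameter name "i".
# This also mangles identifiers that merely contain it as a substring.
naive = query.replace("i", "2")
# -> 'select * from test2d where 2d = {2:UInt64}'

# Boundary-aware: replace only the full {name:Type} placeholder.
safe = re.sub(r"\{i:[A-Za-z0-9()]+\}", "2", query)
# -> 'select * from testid where id = 2'
```

The boundary-aware variant only ever touches the full `{name:Type}` placeholder, which is why renaming the parameter to `s` avoids the collision.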
The error occurs on Fiddle too. | https://github.com/ClickHouse/ClickHouse/issues/47287 | https://github.com/ClickHouse/ClickHouse/pull/47495 | 95e994a9cda18963a557648f2bca5a41dcf1f274 | 52b69768221a94c3b513c6408994889e246d11ba | "2023-03-07T03:00:33Z" | c++ | "2023-03-14T13:33:44Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,267 | ["docs/en/operations/settings/settings.md"] | missing docs for connect_timeout_with_failover_secure_ms | (you don't have to strictly follow this form)
| https://github.com/ClickHouse/ClickHouse/issues/47267 | https://github.com/ClickHouse/ClickHouse/pull/49751 | 9d3d2cf0a8ca7d712564eabc6304d38f654afc04 | 10bc3438ebabb69ba57fe4f696cf2ffe039ddd35 | "2023-03-06T14:47:05Z" | c++ | "2023-05-10T19:46:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,247 | ["src/Interpreters/InterpreterSelectQuery.cpp", "src/Storages/SelectQueryInfo.h", "src/Storages/StorageView.cpp", "src/Storages/StorageView.h", "tests/queries/0_stateless/02428_parameterized_view.reference", "tests/queries/0_stateless/02428_parameterized_view.sh"] | parameterized view leads to segmentation fault |
I upgraded to ClickHouse 23.2.2.20 to use the new parameterized views. Under load I began to get segmentation fault errors. Here are a few stack traces from the logs:
2023.03.04 20:57:50.473964 [ 424295 ] {} <Fatal> BaseDaemon: ########################################
2023.03.04 20:57:50.474035 [ 424295 ] {} <Fatal> BaseDaemon: (version 23.2.2.20 (official build), build id: 57761E6B1331F0CE4077DBCE5D86650F468ADFC6) (from thread 423454) (query_id: 46d67d28-ba7d-4102-a21f-60f9b9e666e5) (query: select product_id, leftover, orders, revenue from sales_report_totals(period=30, product_ids=[144486409,98884628,141567227,116511878,43716695,76559806,88018253,119506222,99997170,18882454,50276048,55097558,124337980,90987422,122206817,69567361,144035763,117942231,85856468,91785664,73097306,92092946,88781232,44885201,11888676,91785005,79124677,96510944,96478068,122168592,16332835,92092945,92069133,143280703,76456581,143276231,109073434,143276230,143274482,122166903,99568115,74294477,122168593,134290291,99547969,122166904,92069132,132480052,16924603,98564728]) FORMAT JSONEachRow) Received signal Segmentation fault (11)
2023.03.04 20:57:50.474069 [ 424295 ] {} <Fatal> BaseDaemon: Address: 0x76 Access: write. Address not mapped to object.
2023.03.04 20:57:50.474090 [ 424295 ] {} <Fatal> BaseDaemon: Stack trace: 0x120c219e 0x120c1edf 0x143e1591 0x14b18080 0x14af53a5 0x12862561 0x12864cba 0x12869d5b 0x13981437 0x146aa356 0x146ae52c 0x1471bafd 0x17613dd4 0x17614ffb 0x1779c387 0x17799dbd 0x7fde1986fb43 0x7fde19901a00
2023.03.04 20:57:50.474160 [ 424295 ] {} <Fatal> BaseDaemon: 2. ? @ 0x120c219e in /usr/bin/clickhouse
2023.03.04 20:57:50.474169 [ 424295 ] {} <Fatal> BaseDaemon: 3. ? @ 0x120c1edf in /usr/bin/clickhouse
2023.03.04 20:57:50.474736 [ 424295 ] {} <Fatal> BaseDaemon: 4. DB::IMergeTreeSelectAlgorithm::~IMergeTreeSelectAlgorithm() @ 0x143e1591 in /usr/bin/clickhouse
2023.03.04 20:57:50.475249 [ 424295 ] {} <Fatal> BaseDaemon: 5. DB::MergeTreeThreadSelectAlgorithm::~MergeTreeThreadSelectAlgorithm() @ 0x14b18080 in /usr/bin/clickhouse
2023.03.04 20:57:50.475265 [ 424295 ] {} <Fatal> BaseDaemon: 6. ? @ 0x14af53a5 in /usr/bin/clickhouse
2023.03.04 20:57:50.475276 [ 424295 ] {} <Fatal> BaseDaemon: 7. ? @ 0x12862561 in /usr/bin/clickhouse
2023.03.04 20:57:50.476408 [ 424295 ] {} <Fatal> BaseDaemon: 8. DB::QueryPipeline::~QueryPipeline() @ 0x12864cba in /usr/bin/clickhouse
2023.03.04 20:57:50.476431 [ 424295 ] {} <Fatal> BaseDaemon: 9. DB::QueryPipeline::reset() @ 0x12869d5b in /usr/bin/clickhouse
2023.03.04 20:57:50.476464 [ 424295 ] {} <Fatal> BaseDaemon: 10. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1:
:basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&)>, std::__1::opt
ional<DB::FormatSettings> const&) @ 0x13981437 in /usr/bin/clickhouse
2023.03.04 20:57:50.476491 [ 424295 ] {} <Fatal> BaseDaemon: 11. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x146aa356 in /usr/bin/clickhouse
2023.03.04 20:57:50.476501 [ 424295 ] {} <Fatal> BaseDaemon: 12. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x146ae52c in /usr/bin/clickhouse
2023.03.04 20:57:50.476511 [ 424295 ] {} <Fatal> BaseDaemon: 13. DB::HTTPServerConnection::run() @ 0x1471bafd in /usr/bin/clickhouse
2023.03.04 20:57:50.476523 [ 424295 ] {} <Fatal> BaseDaemon: 14. Poco::Net::TCPServerConnection::start() @ 0x17613dd4 in /usr/bin/clickhouse
2023.03.04 20:57:50.476534 [ 424295 ] {} <Fatal> BaseDaemon: 15. Poco::Net::TCPServerDispatcher::run() @ 0x17614ffb in /usr/bin/clickhouse
2023.03.04 20:57:50.476546 [ 424295 ] {} <Fatal> BaseDaemon: 16. Poco::PooledThread::run() @ 0x1779c387 in /usr/bin/clickhouse
2023.03.04 20:57:50.476557 [ 424295 ] {} <Fatal> BaseDaemon: 17. Poco::ThreadImpl::runnableEntry(void*) @ 0x17799dbd in /usr/bin/clickhouse
2023.03.04 20:57:50.476568 [ 424295 ] {} <Fatal> BaseDaemon: 18. ? @ 0x7fde1986fb43 in ?
2023.03.04 20:57:50.476578 [ 424295 ] {} <Fatal> BaseDaemon: 19. ? @ 0x7fde19901a00 in ?
2023.03.04 20:57:50.705000 [ 424295 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: ED79FE55CE2580735E8FEB79C7194715)
2023.03.04 21:35:17.610452 [ 701305 ] {} <Fatal> BaseDaemon: ########################################
2023.03.04 21:35:17.610509 [ 701305 ] {} <Fatal> BaseDaemon: (version 23.2.2.20 (official build), build id: 57761E6B1331F0CE4077DBCE5D86650F468ADFC6) (from thread 424528) (query_id: b13741f9-54ae-4673-9fa5-585ef61ca458) (query: select product_id, leftover, orders, revenue from sales_report_totals(period=30, product_ids=[28336563,4193773,4193772,122026531,4193774,35506005,34860443,33533869,73224751,34864599]) FORMAT JSONEachRow) Received signal Segmentation fault (11)
2023.03.04 21:35:17.610533 [ 701305 ] {} <Fatal> BaseDaemon: Address: 0x30 Access: read. Address not mapped to object.
2023.03.04 21:35:17.610552 [ 701305 ] {} <Fatal> BaseDaemon: Stack trace: 0x175376d0 0x1753d470 0x17538993 0x17575769 0x1752c75d 0x1752c3c5 0x175b0cb8 0x17509b2e 0xe09460c 0x1475db77 0x1475b771 0x14755949 0x14757c9e 0xe194dea 0xe19a4a1 0x7f7d54925b43 0x7f7d549b7a00
2023.03.04 21:35:17.610607 [ 701305 ] {} <Fatal> BaseDaemon: 2. edata_heap_first @ 0x175376d0 in /usr/bin/clickhouse
2023.03.04 21:35:17.610624 [ 701305 ] {} <Fatal> BaseDaemon: 3. eset_fit @ 0x1753d470 in /usr/bin/clickhouse
2023.03.04 21:35:17.610648 [ 701305 ] {} <Fatal> BaseDaemon: 4. ? @ 0x17538993 in /usr/bin/clickhouse
2023.03.04 21:35:17.610659 [ 701305 ] {} <Fatal> BaseDaemon: 5. ? @ 0x17575769 in /usr/bin/clickhouse
2023.03.04 21:35:17.610672 [ 701305 ] {} <Fatal> BaseDaemon: 6. ? @ 0x1752c75d in /usr/bin/clickhouse
2023.03.04 21:35:17.610686 [ 701305 ] {} <Fatal> BaseDaemon: 7. arena_cache_bin_fill_small @ 0x1752c3c5 in /usr/bin/clickhouse
2023.03.04 21:35:17.610698 [ 701305 ] {} <Fatal> BaseDaemon: 8. tcache_alloc_small_hard @ 0x175b0cb8 in /usr/bin/clickhouse
2023.03.04 21:35:17.610708 [ 701305 ] {} <Fatal> BaseDaemon: 9. malloc_default @ 0x17509b2e in /usr/bin/clickhouse
2023.03.04 21:35:17.610746 [ 701305 ] {} <Fatal> BaseDaemon: 10. operator new(unsigned long) @ 0xe09460c in /usr/bin/clickhouse
2023.03.04 21:35:17.610758 [ 701305 ] {} <Fatal> BaseDaemon: 11. ? @ 0x1475db77 in /usr/bin/clickhouse
2023.03.04 21:35:17.610779 [ 701305 ] {} <Fatal> BaseDaemon: 12. DB::ExecutingGraph::updateNode(unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*>>>&, std::__1::queue<DB::ExecutingGraph::N
ode*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*>>>&) @ 0x1475b771 in /usr/bin/clickhouse
2023.03.04 21:35:17.610792 [ 701305 ] {} <Fatal> BaseDaemon: 13. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x14755949 in /usr/bin/clickhouse
2023.03.04 21:35:17.610802 [ 701305 ] {} <Fatal> BaseDaemon: 14. ? @ 0x14757c9e in /usr/bin/clickhouse
2023.03.04 21:35:17.610816 [ 701305 ] {} <Fatal> BaseDaemon: 15. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xe194dea in /usr/bin/clickhouse
2023.03.04 21:35:17.610828 [ 701305 ] {} <Fatal> BaseDaemon: 16. ? @ 0xe19a4a1 in /usr/bin/clickhouse
2023.03.04 21:35:17.610839 [ 701305 ] {} <Fatal> BaseDaemon: 17. ? @ 0x7f7d54925b43 in ?
2023.03.04 21:35:17.610848 [ 701305 ] {} <Fatal> BaseDaemon: 18. ? @ 0x7f7d549b7a00 in ?
2023.03.04 21:35:17.775280 [ 701305 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: ED79FE55CE2580735E8FEB79C7194715)
2023.03.04 21:35:38.055982 [ 424304 ] {} <Fatal> Application: Child process was terminated by signal 11.
2023.03.04 21:48:53.457920 [ 962436 ] {} <Fatal> BaseDaemon: ########################################
2023.03.04 21:48:53.457973 [ 962436 ] {} <Fatal> BaseDaemon: (version 23.2.2.20 (official build), build id: 57761E6B1331F0CE4077DBCE5D86650F468ADFC6) (from thread 736536) (query_id: 6b294514-06e6-4468-be63-5ffc6186ab82) (query: select leftover, orders, revenue from sales_report_totals_add(period=30, product_ids=[59117808]) FORMAT JSONEachRow) Received signal Segmentation fault (11)
2023.03.04 21:48:53.457992 [ 962436 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2023.03.04 21:48:53.458026 [ 962436 ] {} <Fatal> BaseDaemon: Stack trace: 0xe1fe32b 0xe1fdb18 0xe1fd762 0xe1fd640 0x135c9e65 0x135c6759 0x13658602 0x136565aa 0x13580110 0x1397aa00 0x13980c72 0x146aa356 0x146ae52c 0x1471bafd 0x17613dd4 0x17614ffb 0x1779c387 0x17799dbd 0x7f7c95ec9b43 0x
7f7c95f5ba00
2023.03.04 21:48:53.458023 [ 962437 ] {} <Fatal> BaseDaemon: ########################################
2023.03.04 21:48:53.458089 [ 962436 ] {} <Fatal> BaseDaemon: 2. ? @ 0xe1fe32b in /usr/bin/clickhouse
2023.03.04 21:48:53.458088 [ 962437 ] {} <Fatal> BaseDaemon: (version 23.2.2.20 (official build), build id: 57761E6B1331F0CE4077DBCE5D86650F468ADFC6) (from thread 799125) (query_id: bde7ae2f-0c53-4e6a-a0c5-c82662edf70a) (query: select leftover, orders, revenue from sales_report_totals_add(period=30, product_ids=[13966401]) FORMAT JSONEachRow) Received signal Segmentation fault (11)
2023.03.04 21:48:53.458124 [ 962436 ] {} <Fatal> BaseDaemon: 3. ? @ 0xe1fdb18 in /usr/bin/clickhouse
2023.03.04 21:48:53.458142 [ 962437 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2023.03.04 21:48:53.458148 [ 962436 ] {} <Fatal> BaseDaemon: 4. ? @ 0xe1fd762 in /usr/bin/clickhouse
2023.03.04 21:48:53.458164 [ 962436 ] {} <Fatal> BaseDaemon: 5. ? @ 0xe1fd640 in /usr/bin/clickhouse
2023.03.04 21:48:53.458171 [ 962437 ] {} <Fatal> BaseDaemon: Stack trace: 0xe1fe32b 0xe1fdb18 0xe1fd762 0xe1fd640 0x135c9e65 0x135c6759 0x13658602 0x136565aa 0x13580110 0x1397aa00 0x13980c72 0x146aa356 0x146ae52c 0x1471bafd 0x17613dd4 0x17614ffb 0x1779c387 0x17799dbd 0x7f7c95ec9b43 0x
7f7c95f5ba00
2023.03.04 21:48:53.458178 [ 962436 ] {} <Fatal> BaseDaemon: 6. ? @ 0x135c9e65 in /usr/bin/clickhouse
2023.03.04 21:48:53.458234 [ 962437 ] {} <Fatal> BaseDaemon: 2. ? @ 0xe1fe32b in /usr/bin/clickhouse
2023.03.04 21:48:53.458250 [ 962437 ] {} <Fatal> BaseDaemon: 3. ? @ 0xe1fdb18 in /usr/bin/clickhouse
2023.03.04 21:48:53.458257 [ 962436 ] {} <Fatal> BaseDaemon: 7. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryO
ptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata con
st> const&, std::__1::shared_ptr<DB::PreparedSets>) @ 0x135c6759 in /usr/bin/clickhouse
2023.03.04 21:48:53.458263 [ 962437 ] {} <Fatal> BaseDaemon: 4. ? @ 0xe1fd762 in /usr/bin/clickhouse
2023.03.04 21:48:53.458284 [ 962436 ] {} <Fatal> BaseDaemon: 8. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::all
ocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x13658602 in /usr/bin/clickhouse
2023.03.04 21:48:53.458289 [ 962437 ] {} <Fatal> BaseDaemon: 5. ? @ 0xe1fd640 in /usr/bin/clickhouse
2023.03.04 21:48:53.458313 [ 962436 ] {} <Fatal> BaseDaemon: 9. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, s
td::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x136565aa in /usr/bin/clickhouse
2023.03.04 21:48:53.458323 [ 962437 ] {} <Fatal> BaseDaemon: 6. ? @ 0x135c9e65 in /usr/bin/clickhouse
2023.03.04 21:48:53.458332 [ 962436 ] {} <Fatal> BaseDaemon: 10. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x13580110 in /usr/bin/clickhouse
2023.03.04 21:48:53.458346 [ 962436 ] {} <Fatal> BaseDaemon: 11. ? @ 0x1397aa00 in /usr/bin/clickhouse
2023.03.04 21:48:53.458365 [ 962436 ] {} <Fatal> BaseDaemon: 12. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1:
:basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&)>, std::__1::opt
ional<DB::FormatSettings> const&) @ 0x13980c72 in /usr/bin/clickhouse
2023.03.04 21:48:53.458369 [ 962437 ] {} <Fatal> BaseDaemon: 7. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryO
ptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata con
st> const&, std::__1::shared_ptr<DB::PreparedSets>) @ 0x135c6759 in /usr/bin/clickhouse
2023.03.04 21:48:53.458382 [ 962436 ] {} <Fatal> BaseDaemon: 13. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x146aa356 in /usr/bin/clickhouse
2023.03.04 21:48:53.458393 [ 962437 ] {} <Fatal> BaseDaemon: 8. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::all
ocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x13658602 in /usr/bin/clickhouse
2023.03.04 21:48:53.458409 [ 962436 ] {} <Fatal> BaseDaemon: 14. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x146ae52c in /usr/bin/clickhouse
2023.03.04 21:48:53.458419 [ 962437 ] {} <Fatal> BaseDaemon: 9. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, s
td::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x136565aa in /usr/bin/clickhouse
2023.03.04 21:48:53.458426 [ 962436 ] {} <Fatal> BaseDaemon: 15. DB::HTTPServerConnection::run() @ 0x1471bafd in /usr/bin/clickhouse
2023.03.04 21:48:53.458439 [ 962437 ] {} <Fatal> BaseDaemon: 10. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x13580110 in /usr/bin/clickhouse
2023.03.04 21:48:53.458458 [ 962436 ] {} <Fatal> BaseDaemon: 16. Poco::Net::TCPServerConnection::start() @ 0x17613dd4 in /usr/bin/clickhouse
2023.03.04 21:48:53.458469 [ 962437 ] {} <Fatal> BaseDaemon: 11. ? @ 0x1397aa00 in /usr/bin/clickhouse
2023.03.04 21:48:53.458474 [ 962436 ] {} <Fatal> BaseDaemon: 17. Poco::Net::TCPServerDispatcher::run() @ 0x17614ffb in /usr/bin/clickhouse
2023.03.04 21:48:53.458486 [ 962436 ] {} <Fatal> BaseDaemon: 18. Poco::PooledThread::run() @ 0x1779c387 in /usr/bin/clickhouse
2023.03.04 21:48:53.458491 [ 962437 ] {} <Fatal> BaseDaemon: 12. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1:
:basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&)>, std::__1::opt
ional<DB::FormatSettings> const&) @ 0x13980c72 in /usr/bin/clickhouse
2023.03.04 21:48:53.458496 [ 962436 ] {} <Fatal> BaseDaemon: 19. Poco::ThreadImpl::runnableEntry(void*) @ 0x17799dbd in /usr/bin/clickhouse
2023.03.04 21:48:53.458515 [ 962437 ] {} <Fatal> BaseDaemon: 13. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x146aa356 in /usr/bin/clickhouse
2023.03.04 21:48:53.458516 [ 962436 ] {} <Fatal> BaseDaemon: 20. ? @ 0x7f7c95ec9b43 in ?
2023.03.04 21:48:53.458533 [ 962437 ] {} <Fatal> BaseDaemon: 14. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x146ae52c in /usr/bin/clickhouse
2023.03.04 21:48:53.458536 [ 962436 ] {} <Fatal> BaseDaemon: 21. ? @ 0x7f7c95f5ba00 in ?
2023.03.04 21:48:53.458550 [ 962437 ] {} <Fatal> BaseDaemon: 15. DB::HTTPServerConnection::run() @ 0x1471bafd in /usr/bin/clickhouse
2023.03.04 21:48:53.458567 [ 962437 ] {} <Fatal> BaseDaemon: 16. Poco::Net::TCPServerConnection::start() @ 0x17613dd4 in /usr/bin/clickhouse
2023.03.04 21:48:53.458579 [ 962437 ] {} <Fatal> BaseDaemon: 17. Poco::Net::TCPServerDispatcher::run() @ 0x17614ffb in /usr/bin/clickhouse
2023.03.04 21:48:53.458593 [ 962437 ] {} <Fatal> BaseDaemon: 18. Poco::PooledThread::run() @ 0x1779c387 in /usr/bin/clickhouse
2023.03.04 21:48:53.458606 [ 962437 ] {} <Fatal> BaseDaemon: 19. Poco::ThreadImpl::runnableEntry(void*) @ 0x17799dbd in /usr/bin/clickhouse
2023.03.04 21:48:53.458618 [ 962437 ] {} <Fatal> BaseDaemon: 20. ? @ 0x7f7c95ec9b43 in ?
2023.03.04 21:48:53.458633 [ 962437 ] {} <Fatal> BaseDaemon: 21. ? @ 0x7f7c95f5ba00 in ?
2023.03.04 21:48:53.617207 [ 962436 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: ED79FE55CE2580735E8FEB79C7194715)
2023.03.04 21:48:53.619722 [ 962437 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: ED79FE55CE2580735E8FEB79C7194715)
2023.03.04 21:55:16.610155 [ 1085942 ] {} <Fatal> BaseDaemon: ########################################
2023.03.04 21:55:16.610193 [ 1085942 ] {} <Fatal> BaseDaemon: (version 23.2.2.20 (official build), build id: 57761E6B1331F0CE4077DBCE5D86650F468ADFC6) (from thread 1024084) (query_id: e2c95437-8071-4cb9-a2fc-0443119c823a) (query: select leftover, orders, revenue from sales_report_totals_add(period=30, product_ids=[145220388]) FORMAT JSONEachRow) Received signal Segmentation fault (11)
2023.03.04 21:55:16.610210 [ 1085942 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
2023.03.04 21:55:16.610225 [ 1085942 ] {} <Fatal> BaseDaemon: Stack trace: 0x14016f49 0x13ff201b 0x14014d7c 0x135de65c 0x135d021c 0x135cf86d 0x1365b116 0x1365bf5e 0x1397b079 0x13980c72 0x146aa356 0x146ae52c 0x1471bafd 0x17613dd4 0x17614ffb 0x1779c387 0x17799dbd 0x7f6255df6b43 0x7f6255e88a00
2023.03.04 21:55:16.610278 [ 1085942 ] {} <Fatal> BaseDaemon: 2. DB::StorageView::replaceValueWithQueryParameter(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>> const&) @ 0x14016f49 in /usr/bin/clickhouse
2023.03.04 21:55:16.610297 [ 1085942 ] {} <Fatal> BaseDaemon: 3. DB::StorageSnapshot::getSampleBlockForColumns(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>> const&) const @ 0x13ff201b in /usr/bin/clickhouse
2023.03.04 21:55:16.610310 [ 1085942 ] {} <Fatal> BaseDaemon: 4. DB::StorageView::read(DB::QueryPlan&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageSnapshot> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned long) @ 0x14014d7c in /usr/bin/clickhouse
2023.03.04 21:55:16.610329 [ 1085942 ] {} <Fatal> BaseDaemon: 5. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&) @ 0x135de65c in /usr/bin/clickhouse
2023.03.04 21:55:16.610337 [ 1085942 ] {} <Fatal> BaseDaemon: 6. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::optional<DB::Pipe>) @ 0x135d021c in /usr/bin/clickhouse
2023.03.04 21:55:16.610347 [ 1085942 ] {} <Fatal> BaseDaemon: 7. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x135cf86d in /usr/bin/clickhouse
2023.03.04 21:55:16.610356 [ 1085942 ] {} <Fatal> BaseDaemon: 8. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x1365b116 in /usr/bin/clickhouse
2023.03.04 21:55:16.610368 [ 1085942 ] {} <Fatal> BaseDaemon: 9. DB::InterpreterSelectWithUnionQuery::execute() @ 0x1365bf5e in /usr/bin/clickhouse
2023.03.04 21:55:16.610376 [ 1085942 ] {} <Fatal> BaseDaemon: 10. ? @ 0x1397b079 in /usr/bin/clickhouse
2023.03.04 21:55:16.610387 [ 1085942 ] {} <Fatal> BaseDaemon: 11. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x13980c72 in /usr/bin/clickhouse
2023.03.04 21:55:16.610399 [ 1085942 ] {} <Fatal> BaseDaemon: 12. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x146aa356 in /usr/bin/clickhouse
2023.03.04 21:55:16.610407 [ 1085942 ] {} <Fatal> BaseDaemon: 13. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x146ae52c in /usr/bin/clickhouse
2023.03.04 21:55:16.610415 [ 1085942 ] {} <Fatal> BaseDaemon: 14. DB::HTTPServerConnection::run() @ 0x1471bafd in /usr/bin/clickhouse
2023.03.04 21:55:16.610425 [ 1085942 ] {} <Fatal> BaseDaemon: 15. Poco::Net::TCPServerConnection::start() @ 0x17613dd4 in /usr/bin/clickhouse
2023.03.04 21:55:16.610432 [ 1085942 ] {} <Fatal> BaseDaemon: 16. Poco::Net::TCPServerDispatcher::run() @ 0x17614ffb in /usr/bin/clickhouse
2023.03.04 21:55:16.610441 [ 1085942 ] {} <Fatal> BaseDaemon: 17. Poco::PooledThread::run() @ 0x1779c387 in /usr/bin/clickhouse
2023.03.04 21:55:16.610450 [ 1085942 ] {} <Fatal> BaseDaemon: 18. Poco::ThreadImpl::runnableEntry(void*) @ 0x17799dbd in /usr/bin/clickhouse
2023.03.04 21:55:16.610458 [ 1085942 ] {} <Fatal> BaseDaemon: 19. ? @ 0x7f6255df6b43 in ?
2023.03.04 21:55:16.610464 [ 1085942 ] {} <Fatal> BaseDaemon: 20. ? @ 0x7f6255e88a00 in ?
2023.03.04 21:55:16.773203 [ 1085942 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: ED79FE55CE2580735E8FEB79C7194715)
2023.03.04 21:55:17.270175 [ 1086171 ] {} <Fatal> BaseDaemon: ########################################
2023.03.04 21:55:17.270218 [ 1086171 ] {} <Fatal> BaseDaemon: (version 23.2.2.20 (official build), build id: 57761E6B1331F0CE4077DBCE5D86650F468ADFC6) (from thread 1022496) (query_id: 4479ca34-e797-42bd-93e2-ab7098307ac3) (query: select leftover, orders, revenue from sales_report_totals_add(period=30, product_ids=[147253136]) FORMAT JSONEachRow) Received signal Segmentation fault (11)
2023.03.04 21:55:17.270238 [ 1086171 ] {} <Fatal> BaseDaemon: Address: 0x8a7e344 Access: write. Attempted access has violated the permissions assigned to the memory area.
2023.03.04 21:55:17.270267 [ 1086171 ] {} <Fatal> BaseDaemon: Stack trace: 0x89582ef 0xe1fd623 0x135c9e65 0x135c6759 0x13658602 0x136565aa 0x13580110 0x1397aa00 0x13980c72 0x146aa356 0x146ae52c 0x1471bafd 0x17613dd4 0x17614ffb 0x1779c387 0x17799dbd 0x7f6255df6b43 0x7f6255e88a00
2023.03.04 21:55:17.270307 [ 1086171 ] {} <Fatal> BaseDaemon: 2. ? @ 0x89582ef in /usr/bin/clickhouse
2023.03.04 21:55:17.270320 [ 1086171 ] {} <Fatal> BaseDaemon: 3. ? @ 0xe1fd623 in /usr/bin/clickhouse
2023.03.04 21:55:17.270334 [ 1086171 ] {} <Fatal> BaseDaemon: 4. ? @ 0x135c9e65 in /usr/bin/clickhouse
2023.03.04 21:55:17.270369 [ 1086171 ] {} <Fatal> BaseDaemon: 5. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>) @ 0x135c6759 in /usr/bin/clickhouse
2023.03.04 21:55:17.270396 [ 1086171 ] {} <Fatal> BaseDaemon: 6. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x13658602 in /usr/bin/clickhouse
2023.03.04 21:55:17.270413 [ 1086171 ] {} <Fatal> BaseDaemon: 7. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x136565aa in /usr/bin/clickhouse
2023.03.04 21:55:17.270430 [ 1086171 ] {} <Fatal> BaseDaemon: 8. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x13580110 in /usr/bin/clickhouse
2023.03.04 21:55:17.270442 [ 1086171 ] {} <Fatal> BaseDaemon: 9. ? @ 0x1397aa00 in /usr/bin/clickhouse
2023.03.04 21:55:17.270460 [ 1086171 ] {} <Fatal> BaseDaemon: 10. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x13980c72 in /usr/bin/clickhouse
2023.03.04 21:55:17.270477 [ 1086171 ] {} <Fatal> BaseDaemon: 11. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x146aa356 in /usr/bin/clickhouse
2023.03.04 21:55:17.270494 [ 1086171 ] {} <Fatal> BaseDaemon: 12. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x146ae52c in /usr/bin/clickhouse
2023.03.04 21:55:17.270507 [ 1086171 ] {} <Fatal> BaseDaemon: 13. DB::HTTPServerConnection::run() @ 0x1471bafd in /usr/bin/clickhouse
2023.03.04 21:55:17.270522 [ 1086171 ] {} <Fatal> BaseDaemon: 14. Poco::Net::TCPServerConnection::start() @ 0x17613dd4 in /usr/bin/clickhouse
2023.03.04 21:55:17.270534 [ 1086171 ] {} <Fatal> BaseDaemon: 15. Poco::Net::TCPServerDispatcher::run() @ 0x17614ffb in /usr/bin/clickhouse
2023.03.04 21:55:17.270552 [ 1086171 ] {} <Fatal> BaseDaemon: 16. Poco::PooledThread::run() @ 0x1779c387 in /usr/bin/clickhouse
2023.03.04 21:55:17.270566 [ 1086171 ] {} <Fatal> BaseDaemon: 17. Poco::ThreadImpl::runnableEntry(void*) @ 0x17799dbd in /usr/bin/clickhouse
2023.03.04 21:55:17.270578 [ 1086171 ] {} <Fatal> BaseDaemon: 18. ? @ 0x7f6255df6b43 in ?
2023.03.04 21:55:17.270588 [ 1086171 ] {} <Fatal> BaseDaemon: 19. ? @ 0x7f6255e88a00 in ?
2023.03.04 21:55:17.431962 [ 1086171 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: ED79FE55CE2580735E8FEB79C7194715)
2023.03.04 21:55:36.850496 [ 978566 ] {} <Fatal> Application: Child process was terminated by signal 11.
Here are the views mentioned in the stack traces:
CREATE VIEW sales_report_totals
(
`product_id` UInt32,
`leftover` UInt32,
`orders` UInt32,
`revenue` UInt32
) AS
SELECT
product_id,
argMax(leftover, sale_date) AS leftover,
sum(orders) AS orders,
sum(revenue) AS revenue
FROM sale_stock_stat
WHERE (sale_date >= toDate(now() - toIntervalDay({period:UInt8}))) AND (product_id IN ({product_ids:Array(UInt32)}))
GROUP BY product_id
CREATE VIEW sales_report_totals_add
(
`product_id` UInt32,
`leftover` UInt32,
`orders` UInt32,
`revenue` UInt32
) AS
SELECT
product_id,
argMax(leftover, sale_date) + argMax(leftover_add, sale_date) AS leftover,
sum(orders) + sum(orders_add) AS orders,
sum(revenue) + sum(revenue_add) AS revenue
FROM sale_stock_stat
WHERE (sale_date >= toDate(now() - toIntervalDay({period:UInt8}))) AND (product_id IN ({product_ids:Array(UInt32)}))
GROUP BY product_id
Unfortunately I can't reproduce it by hand.
If additional info is needed, I'll provide it.
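Both stack traces pass through `DB::StorageView::replaceValueWithQueryParameter`, i.e. the step that substitutes `{name:Type}` placeholders in the parameterized views above with the values supplied at query time. As an illustration only, here is a minimal Python sketch of that kind of substitution; the regex, function name, and error handling are assumptions for the sketch, not ClickHouse's implementation:

```python
import re

def replace_query_parameters(query: str, params: dict) -> str:
    """Substitute {name:Type} placeholders with provided parameter values.

    Raises KeyError for a placeholder with no matching parameter instead
    of dereferencing a missing value.
    """
    def substitute(match: re.Match) -> str:
        name = match.group("name")
        if name not in params:
            raise KeyError(f"No value provided for query parameter '{name}'")
        return str(params[name])

    # {period:UInt8} -> name "period", ":UInt8" is the declared type part
    return re.sub(r"\{(?P<name>\w+):[^}]+\}", substitute, query)

query = "SELECT * FROM t WHERE d >= toDate(now() - toIntervalDay({period:UInt8}))"
print(replace_query_parameters(query, {"period": 30}))
```

A substitution pass like this has to handle the case where a placeholder resolves to nothing; the NULL-pointer read in frame 2 suggests that path is where the crash originates.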
| https://github.com/ClickHouse/ClickHouse/issues/47247 | https://github.com/ClickHouse/ClickHouse/pull/47495 | 95e994a9cda18963a557648f2bca5a41dcf1f274 | 52b69768221a94c3b513c6408994889e246d11ba | "2023-03-05T01:09:54Z" | c++ | "2023-03-14T13:33:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,200 | ["src/Storages/MergeTree/PartMetadataManagerWithCache.cpp"] | Upgrade check: Index file `primary.cidx` is unexpectedly long (broken part) | https://s3.amazonaws.com/clickhouse-test-reports/47152/fdcbec4fee7df1bb5a51249cef96964b5245c822/stress_test__debug_.html
```
2023.03.03 01:35:43.985393 [ 221299 ] {} <Error> test_metadata_cache.check_part_metadata_cache: while loading part 201806_0_0_0_2 on path data/test_metadata_cache/check_part_metadata_cache/201806_0_0_0_2: Code: 4. DB::Exception: Index file data/test_metadata_cache/check_part_metadata_cache/201806_0_0_0_2/primary.cidx is unexpectedly long. (EXPECTED_END_OF_FILE), Stack trace (when copying this message, always include the lines below):
0. ./build_docker/../contrib/llvm-project/libcxx/include/exception:0: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, int) @ 0x2326a6be in /usr/bin/clickhouse
1. ./build_docker/../src/Common/Exception.cpp:91: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x14551458 in /usr/bin/clickhouse
2. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>&>(int, FormatStringHelperImpl<std::__1::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>&>::type>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>&) @ 0xcb6da79 in /usr/bin/clickhouse
3. ./build_docker/../src/Storages/MergeTree/IMergeTreeDataPart.cpp:766: DB::IMergeTreeDataPart::loadIndex() @ 0x1ecee3e0 in /usr/bin/clickhouse
4. ./build_docker/../src/Storages/MergeTree/IMergeTreeDataPart.cpp:627: DB::IMergeTreeDataPart::loadColumnsChecksumsIndexes(bool, bool) @ 0x1eceb34c in /usr/bin/clickhouse
5. ./build_docker/../src/Storages/MergeTree/MergeTreeData.cpp:0: DB::MergeTreeData::loadDataPart(DB::MergeTreePartInfo const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::IDisk> const&, DB::MergeTreeDataPartState, std::__1::mutex&) @ 0x1ed9a5c4 in /usr/bin/clickhouse
6. ./build_docker/../src/Storages/MergeTree/MergeTreeData.cpp:1435: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::MergeTreeData::loadDataPartsFromDisk(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, unsigned long, std::__1::queue<std::__1::vector<std::__1::shared_ptr<DB::MergeTreeData::PartLoadingTree::Node>, std::__1::allocator<std::__1::shared_ptr<DB::MergeTreeData::PartLoadingTree::Node>>>, std::__1::deque<std::__1::vector<std::__1::shared_ptr<DB::MergeTreeData::PartLoadingTree::Node>, std::__1::allocator<std::__1::shared_ptr<DB::MergeTreeData::PartLoadingTree::Node>>>, std::__1::allocator<std::__1::vector<std::__1::shared_ptr<DB::MergeTreeData::PartLoadingTree::Node>, std::__1::allocator<std::__1::shared_ptr<DB::MergeTreeData::PartLoadingTree::Node>>>>>>&, std::__1::shared_ptr<DB::MergeTreeSettings const> const&)::$_19, void ()>>(std::__1::__function::__policy_storage const*) @ 0x1ee22e45 in /usr/bin/clickhouse
7. ./build_docker/../base/base/wide_integer_impl.h:786: ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__1::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x14667791 in /usr/bin/clickhouse
8. ./build_docker/../src/Common/ThreadPool.cpp:0: ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'()::operator()() @ 0x1466bdf8 in /usr/bin/clickhouse
9. ./build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:717: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) @ 0x1466bd62 in /usr/bin/clickhouse
10. ./build_docker/../base/base/wide_integer_impl.h:786: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x14664331 in /usr/bin/clickhouse
11. ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x14669272 in /usr/bin/clickhouse
12. __tsan_thread_start_func @ 0xc5017a7 in /usr/bin/clickhouse
13. ? @ 0x7fcacc1ce609 in ?
2023.03.03 01:35:43.986723 [ 221299 ] {} <Error> test_metadata_cache.check_part_metadata_cache: Detaching broken part /var/lib/clickhouse/disks/s3/data/test_metadata_cache/check_part_metadata_cache/201806_0_0_0_2 (size: 791.00 B). If it happened after update, it is likely because of backward incompatibility. You need to resolve this manually
```
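The `EXPECTED_END_OF_FILE` error above fires in `IMergeTreeDataPart::loadIndex` when `primary.cidx` contains more bytes than the number of index marks implies. The shape of that check can be sketched in Python: read the expected number of fixed-size entries, then require EOF. The names and sizes here are illustrative, not ClickHouse's on-disk format:

```python
import io

def load_index(f, expected_entries: int, entry_size: int) -> list:
    """Read a fixed number of fixed-size index entries and require EOF after them."""
    entries = [f.read(entry_size) for _ in range(expected_entries)]
    if any(len(e) != entry_size for e in entries):
        raise EOFError("index file is unexpectedly short")
    if f.read(1):
        # Extra trailing bytes: the file is longer than the mark count implies.
        raise ValueError("index file is unexpectedly long")
    return entries

good = io.BytesIO(b"\x00" * 16)
assert len(load_index(good, 2, 8)) == 2
```

A part failing this check is detached as broken, which matches the "Detaching broken part" message that follows in the log.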
| https://github.com/ClickHouse/ClickHouse/issues/47200 | https://github.com/ClickHouse/ClickHouse/pull/48010 | 03f24ee8aef1998db05e5b08a52e80dd8806cc14 | 5ab99ec916f2527cae1ef1a3c2453aa0e817178b | "2023-03-03T13:40:16Z" | c++ | "2023-03-26T21:16:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,185 | ["docs/en/sql-reference/functions/string-functions.md", "docs/en/sql-reference/functions/tuple-map-functions.md", "src/Functions/keyvaluepair/extractKeyValuePairs.cpp", "tests/queries/0_stateless/02499_extract_key_value_pairs_multiple_input.reference", "tests/queries/0_stateless/02499_extract_key_value_pairs_multiple_input.sql"] | Implement `string_to_map` function | Check https://docs.databricks.com/sql/language-manual/functions/str_to_map.html | https://github.com/ClickHouse/ClickHouse/issues/47185 | https://github.com/ClickHouse/ClickHouse/pull/49466 | 43a5eb0fafd68dcebe8da88638a40448bd7da38d | d8d2b0af7654e03e511f81e58a200c2bbb4f0799 | "2023-03-03T08:40:17Z" | c++ | "2023-05-08T08:11:06Z" |
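The Databricks `str_to_map` semantics referenced in the row above (default delimiters: `,` between pairs, `:` between key and value) can be sketched in a few lines of Python. This is an illustration of the requested behavior only; the changed files listed for the linked PR suggest the feature ultimately landed around `extractKeyValuePairs`, not as a literal `str_to_map`:

```python
def str_to_map(text: str, pair_delim: str = ",", kv_delim: str = ":") -> dict:
    """Split a string into a map, e.g. 'a:1,b:2,c:3' -> {'a': '1', 'b': '2', 'c': '3'}.

    A token without the key/value delimiter maps the whole token to None,
    mirroring Spark/Databricks behavior for such tokens.
    """
    result = {}
    for pair in text.split(pair_delim):
        key, sep, value = pair.partition(kv_delim)
        result[key] = value if sep else None
    return result

print(str_to_map("a:1,b:2,c:3"))  # {'a': '1', 'b': '2', 'c': '3'}
```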
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,159 | ["src/Core/Settings.h", "src/Storages/StorageFile.cpp", "tests/queries/0_stateless/02497_storage_file_reader_selection.reference", "tests/queries/0_stateless/02497_storage_file_reader_selection.sh"] | SIGBUS in `segmentatorThreadFunction`/`fileSegmentationEngine*` | https://s3.amazonaws.com/clickhouse-test-reports/46681/ad4a44df52b6b04ac5977d12aa35b099a792133c/upgrade_check__asan_.html
```
2023-03-02 07:45:26 Thread 1305 "Segmentator" received signal SIGBUS, Bus error.
2023-03-02 07:45:26 [Switching to Thread 0x7f933171b700 (LWP 139986)]
2023-03-02 07:45:26 detail::find_first_symbols_sse2<true, (detail::ReturnMode)0, (char)13, (char)10> (begin=<optimized out>, end=0x7f9a24a18095 "8)\003\002") at ../base/base/find_symbols.h:85
2023-03-02 07:45:26 #0 detail::find_first_symbols_sse2<true, (detail::ReturnMode)0, (char)13, (char)10> (begin=<optimized out>, end=0x7f9a24a18095 "8)\003\002") at ../base/base/find_symbols.h:85
2023-03-02 07:45:26 bit_mask = <optimized out>
2023-03-02 07:45:26 pos = 0x7f9a24a18000 "\002\anested1\anested20Array(Tuple(Array(UInt64), Map(String, UInt64)))TTuple(Tuple(Array(Array(UInt64)), Map(UInt64, Array(Tuple(UInt64, String)))), UInt8)\003\002"
2023-03-02 07:45:26 bit_mask = <optimized out>
2023-03-02 07:45:26 bytes = <optimized out>
2023-03-02 07:45:26 eq = <optimized out>
2023-03-02 07:45:26 #1 detail::find_first_symbols_dispatch<true, (detail::ReturnMode)0, (char)13, (char)10> (begin=<optimized out>, end=0x7f9a24a18095 "8)\003\002") at ../base/base/find_symbols.h:194
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #2 find_first_symbols<(char)13, (char)10> (begin=<optimized out>, end=0x7f9a24a18095 "8)\003\002") at ../base/base/find_symbols.h:211
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #3 DB::fileSegmentationEngineTabSeparatedImpl (in=..., memory=..., is_raw=<optimized out>, min_bytes=19425859, min_rows=3, max_rows=19348) at ../src/Processors/Formats/Impl/TabSeparatedRowInputFormat.cpp:377
2023-03-02 07:45:26 need_more_data = true
2023-03-02 07:45:26 pos = <optimized out>
2023-03-02 07:45:26 number_of_rows = <optimized out>
2023-03-02 07:45:26 #4 0x0000000014865978 in std::__1::__function::__policy_func<std::__1::pair<bool, unsigned long> (DB::ReadBuffer&, DB::Memory<Allocator<false, false> >&, unsigned long, unsigned long)>::operator()[abi:v15000](DB::ReadBuffer&, DB::Memory<Allocator<false, false> >&, unsigned long&&, unsigned long&&) const (this=0x7f97b21eacb0, __args=<optimized out>, __args=<optimized out>, __args=<optimized out>, __args=<optimized out>) at ../contrib/llvm-project/libcxx/include/__functional/function.h:848
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #5 std::__1::function<std::__1::pair<bool, unsigned long> (DB::ReadBuffer&, DB::Memory<Allocator<false, false> >&, unsigned long, unsigned long)>::operator()(DB::ReadBuffer&, DB::Memory<Allocator<false, false> >&, unsigned long, unsigned long) const (this=0x7f97b21eacb0, __arg=19348, __arg=19348, __arg=19348, __arg=19348) at ../contrib/llvm-project/libcxx/include/__functional/function.h:1187
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #6 DB::ParallelParsingInputFormat::segmentatorThreadFunction (this=0x7f97b21eab18, thread_group=...) at ../src/Processors/Formats/Impl/ParallelParsingInputFormat.cpp:41
2023-03-02 07:45:26 segmentator_unit_number = <optimized out>
2023-03-02 07:45:26 unit = <optimized out>
2023-03-02 07:45:26 have_more_data = <optimized out>
2023-03-02 07:45:26 currently_read_rows = <optimized out>
2023-03-02 07:45:26 scope_exit16 = {static is_nullable = <optimized out>, function = {thread_group = @0x7f9331712748}}
2023-03-02 07:45:26 #7 0x0000000014869ad6 in std::__1::__invoke[abi:v15000]<void (DB::ParallelParsingInputFormat::*&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*&, std::__1::shared_ptr<DB::ThreadGroupStatus>&, void> (__f=<optimized out>, __a0=<optimized out>, __args=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:359
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #8 std::__1::__apply_tuple_impl[abi:v15000]<void (DB::ParallelParsingInputFormat::*&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), std::__1::tuple<DB::ParallelParsingInputFormat*, std::__1::shared_ptr<DB::ThreadGroupStatus> >&, 0ul, 1ul>(void (DB::ParallelParsingInputFormat::*&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), std::__1::tuple<DB::ParallelParsingInputFormat*, std::__1::shared_ptr<DB::ThreadGroupStatus> >&, std::__1::__tuple_indices<0ul, 1ul>) (__f=<optimized out>, __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1789
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #9 std::__1::apply[abi:v15000]<void (DB::ParallelParsingInputFormat::*&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), std::__1::tuple<DB::ParallelParsingInputFormat*, std::__1::shared_ptr<DB::ThreadGroupStatus> >&>(void (DB::ParallelParsingInputFormat::*&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), std::__1::tuple<DB::ParallelParsingInputFormat*, std::__1::shared_ptr<DB::ThreadGroupStatus> >&) (__f=<optimized out>, __t=...) at ../contrib/llvm-project/libcxx/include/tuple:1798
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #10 ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::ParallelParsingInputFormat::*)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*, std::__1::shared_ptr<DB::ThreadGroupStatus> >(void (DB::ParallelParsingInputFormat::*&&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*&&, std::__1::shared_ptr<DB::ThreadGroupStatus>&&)::{lambda()#1}::operator()() (this=0x7f9983847c00) at ../src/Common/ThreadPool.h:210
2023-03-02 07:45:26 thread_status = <incomplete type>
2023-03-02 07:45:26 scope_exit198 = {static is_nullable = false, function = {<No data fields>}}
2023-03-02 07:45:26 function = <optimized out>
2023-03-02 07:45:26 arguments = {__base_ = {<std::__1::__tuple_leaf<0ul, DB::ParallelParsingInputFormat*, false>> = {__value_ = <optimized out>}, <std::__1::__tuple_leaf<1ul, std::__1::shared_ptr<DB::ThreadGroupStatus>, false>> = {__value_ = {__ptr_ = 0x7f996558a618, __cntrl_ = 0x7f996558a600}}, <No data fields>}}
2023-03-02 07:45:26 #11 std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::ParallelParsingInputFormat::*)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*, std::__1::shared_ptr<DB::ThreadGroupStatus> >(void (DB::ParallelParsingInputFormat::*&&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*&&, std::__1::shared_ptr<DB::ThreadGroupStatus>&&)::{lambda()#1}&> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #12 std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::ParallelParsingInputFormat::*)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*, std::__1::shared_ptr<DB::ThreadGroupStatus> >(void (DB::ParallelParsingInputFormat::*&&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*&&, std::__1::shared_ptr<DB::ThreadGroupStatus>&&)::{lambda()#1}&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::ParallelParsingInputFormat::*)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*, std::__1::shared_ptr<DB::ThreadGroupStatus> >(void (DB::ParallelParsingInputFormat::*&&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*&&, std::__1::shared_ptr<DB::ThreadGroupStatus>&&)::{lambda()#1}&) (__args=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:479
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #13 std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::ParallelParsingInputFormat::*)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*, std::__1::shared_ptr<DB::ThreadGroupStatus> >(void (DB::ParallelParsingInputFormat::*&&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*&&, std::__1::shared_ptr<DB::ThreadGroupStatus>&&)::{lambda()#1}, void ()>::operator()[abi:v15000]() (this=0x7f9983847c00) at ../contrib/llvm-project/libcxx/include/__functional/function.h:235
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #14 std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::ParallelParsingInputFormat::*)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*, std::__1::shared_ptr<DB::ThreadGroupStatus> >(void (DB::ParallelParsingInputFormat::*&&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*&&, std::__1::shared_ptr<DB::ThreadGroupStatus>&&)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*) (__buf=<optimized out>) at ../contrib/llvm-project/libcxx/include/__functional/function.h:716
2023-03-02 07:45:26 __f = 0x7f9983847c00
2023-03-02 07:45:26 #15 0x000000000e194dea in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const (this=0x7f9331712a60) at ../contrib/llvm-project/libcxx/include/__functional/function.h:848
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #16 std::__1::function<void ()>::operator()() const (this=0x7f9331712a60) at ../contrib/llvm-project/libcxx/include/__functional/function.h:1187
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #17 ThreadPoolImpl<std::__1::thread>::worker (this=0x7f9a2399d2c0, thread_it=...) at ../src/Common/ThreadPool.cpp:315
2023-03-02 07:45:26 metric_active_threads = {what = <optimized out>, amount = 1}
2023-03-02 07:45:26 thread_trace_context = {root_span = {trace_id = {t = {items = {0, 0}}}, span_id = 0, parent_span_id = 0, operation_name = {static __endian_factor = 1, __r_ = {<std::__1::__compressed_pair_elem<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__rep, 0, false>> = {__value_ = {{__l = {__data_ = 0x0, __size_ = 0, __cap_ = 0, __is_long_ = 0}, __s = {__data_ = '\000' <repeats 22 times>, __padding_ = 0x7f9331712b47 "", __size_ = 0 '\000', __is_long_ = 0 '\000'}, __r = {__words = {0, 0, 0}}}}}, <std::__1::__compressed_pair_elem<std::__1::allocator<char>, 1, true>> = {<std::__1::allocator<char>> = {<std::__1::__non_trivial_if<true, std::__1::allocator<char> >> = {<No data fields>}, <No data fields>}, <No data fields>}, <No data fields>}, static npos = 18446744073709551615}, start_time_us = 0, finish_time_us = 0, attributes = {<std::__1::vector<DB::Field, AllocatorWithMemoryTracking<DB::Field> >> = {__begin_ = 0x0, __end_ = 0x0, __end_cap_ = {<std::__1::__compressed_pair_elem<DB::Field*, 0, false>> = {__value_ = 0x0}, <std::__1::__compressed_pair_elem<AllocatorWithMemoryTracking<DB::Field>, 1, true>> = {<AllocatorWithMemoryTracking<DB::Field>> = {<No data fields>}, <No data fields>}, <No data fields>}}, <No data fields>}}, is_context_owner = true}
2023-03-02 07:45:26 job = {<std::__1::__function::__maybe_derive_from_unary_function<void ()>> = {<No data fields>}, <std::__1::__function::__maybe_derive_from_binary_function<void ()>> = {<No data fields>}, __f_ = {__buf_ = {__small = "\000|\204\203\231\177\000\000\000\336\226\r\231\177\000", __large = 0x7f9983847c00}, __invoker_ = {__call_ = 0x14869a60 <std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::ParallelParsingInputFormat::*)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*, std::__1::shared_ptr<DB::ThreadGroupStatus> >(void (DB::ParallelParsingInputFormat::*&&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*&&, std::__1::shared_ptr<DB::ThreadGroupStatus>&&)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*)>}, __policy_ = 0x4dcebe8 <std::__1::__function::__policy::__choose_policy[abi:v15000]<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::ParallelParsingInputFormat::*)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*, std::__1::shared_ptr<DB::ThreadGroupStatus> >(void (DB::ParallelParsingInputFormat::*&&)(std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ParallelParsingInputFormat*&&, std::__1::shared_ptr<DB::ThreadGroupStatus>&&)::{lambda()#1}, void ()> >(std::__1::integral_constant<bool, false>)::__policy_>}}
2023-03-02 07:45:26 parent_thead_trace_context = {<DB::OpenTelemetry::TracingContext> = {trace_id = {t = {items = {0, 0}}}, span_id = 0, tracestate = {static __endian_factor = 1, __r_ = {<std::__1::__compressed_pair_elem<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__rep, 0, false>> = {__value_ = {{__l = {__data_ = 0x0, __size_ = 0, __cap_ = 0, __is_long_ = 0}, __s = {__data_ = '\000' <repeats 22 times>, __padding_ = 0x7f9331712aef "", __size_ = 0 '\000', __is_long_ = 0 '\000'}, __r = {__words = {0, 0, 0}}}}}, <std::__1::__compressed_pair_elem<std::__1::allocator<char>, 1, true>> = {<std::__1::allocator<char>> = {<std::__1::__non_trivial_if<true, std::__1::allocator<char> >> = {<No data fields>}, <No data fields>}, <No data fields>}, <No data fields>}, static npos = 18446744073709551615}, trace_flags = 0 '\000'}, span_log = {__ptr_ = 0x0, __cntrl_ = 0x0}}
2023-03-02 07:45:26 need_shutdown = true
2023-03-02 07:45:26 metric_all_threads = {what = <optimized out>, amount = 1}
2023-03-02 07:45:26 #18 0x000000000e19a4a1 in ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}::operator()() const (this=0x7f98661b71c8) at ../src/Common/ThreadPool.cpp:145
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #19 std::__1::__invoke[abi:v15000]<ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> (__f=...) at ../contrib/llvm-project/libcxx/include/__functional/invoke.h:394
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #20 std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>&, std::__1::__tuple_indices<>) (__t=...) at ../contrib/llvm-project/libcxx/include/thread:284
2023-03-02 07:45:26 No locals.
2023-03-02 07:45:26 #21 std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}> >(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::{lambda()#2}>) (__vp=0x7f98661b71c0) at ../contrib/llvm-project/libcxx/include/thread:295
2023-03-02 07:45:26 __p = {__ptr_ = {<std::__1::__compressed_pair_elem<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, (lambda at ../src/Common/ThreadPool.cpp:145:42)> *, 0, false>> = {__value_ = 0x7f98661b71c0}, <std::__1::__compressed_pair_elem<std::__1::default_delete<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, (lambda at ../src/Common/ThreadPool.cpp:145:42)> >, 1, true>> = {<std::__1::default_delete<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, (lambda at ../src/Common/ThreadPool.cpp:145:42)> >> = {<No data fields>}, <No data fields>}, <No data fields>}}
2023-03-02 07:45:26 #22 0x00007f9a249ea609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
2023-03-02 07:45:26 No symbol table info available.
2023-03-02 07:45:26 #23 0x00007f9a2490f133 in clone () from /lib/x86_64-linux-gnu/libc.so.6
2023-03-02 07:45:26 No symbol table info available.
```
More reports: [link](https://play.clickhouse.com/play?user=play#d2l0aCAKJyUlJyBhcyBuYW1lX3BhdHRlcm4sCiclZmlsZVNlZ21lbnRhdGlvbkVuZ2luZSUnIGFzIGNvbnRleHRfcGF0dGVybiwKJzIwMjItMDktMDEnIGFzIHN0YXJ0X2RhdGUsCig0NTQ2MSkgYXMgbm9pc3lfcHJzLAooJ1N0YXRlbGVzcyB0ZXN0cyAoYXNhbiknLCAnU3RhdGVsZXNzIHRlc3RzIChhZGRyZXNzKScsICdTdGF0ZWxlc3MgdGVzdHMgKGFkZHJlc3MsIGFjdGlvbnMpJykgYXMgYmFja3BvcnRfYW5kX3JlbGVhc2Vfc3BlY2lmaWNfY2hlY2tzCnNlbGVjdCAKdG9TdGFydE9mRGF5KGNoZWNrX3N0YXJ0X3RpbWUpIGFzIGQsCmNvdW50KCksICBncm91cFVuaXFBcnJheShwdWxsX3JlcXVlc3RfbnVtYmVyKSwgYW55KHJlcG9ydF91cmwpCmZyb20gY2hlY2tzIHdoZXJlIHN0YXJ0X2RhdGUgPD0gY2hlY2tfc3RhcnRfdGltZSBhbmQgcHVsbF9yZXF1ZXN0X251bWJlciBub3QgaW4gCihzZWxlY3QgcHVsbF9yZXF1ZXN0X251bWJlciBhcyBwcm4gZnJvbSBjaGVja3Mgd2hlcmUgcHJuIT0wIGFuZCBzdGFydF9kYXRlIDw9IGNoZWNrX3N0YXJ0X3RpbWUgYW5kIGNoZWNrX25hbWUgaW4gYmFja3BvcnRfYW5kX3JlbGVhc2Vfc3BlY2lmaWNfY2hlY2tzKSAKYW5kIHRlc3RfbmFtZSBsaWtlIG5hbWVfcGF0dGVybiBhbmQgdGVzdF9jb250ZXh0X3JhdyBpbGlrZSBjb250ZXh0X3BhdHRlcm4KYW5kIHB1bGxfcmVxdWVzdF9udW1iZXIgbm90IGluIG5vaXN5X3BycwphbmQgdGVzdF9zdGF0dXMgaW4gKCdGQUlMJywgJ0ZMQUtZJywgJ2ZhaWx1cmUnKSBncm91cCBieSBkIG9yZGVyIGJ5IGQgZGVzYw==)
cc: @nikitamikhaylov | https://github.com/ClickHouse/ClickHouse/issues/47159 | https://github.com/ClickHouse/ClickHouse/pull/49717 | 8997c6ef953cf96e6ed9e966d1ef034c19e1ec3d | ea979b40a962c1f7f30c8eeba8db6c0a55325906 | "2023-03-02T19:16:59Z" | c++ | "2023-05-11T20:53:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 47,030 | ["docs/en/operations/system-tables/index.md"] | No documentation for `system.build_options` | https://clickhouse.com/docs/en/operations/system-tables/ does not have information about build options table. | https://github.com/ClickHouse/ClickHouse/issues/47030 | https://github.com/ClickHouse/ClickHouse/pull/47430 | 2575db15221dbe3ebbf3c109ea67c1255dfb7b1b | c2a5318df4a91678850cf8f493e75728264cf139 | "2023-02-28T16:27:41Z" | c++ | "2023-07-04T23:51:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,975 | ["src/Interpreters/executeQuery.cpp", "tests/integration/test_log_levels_update/test.py", "tests/queries/0_stateless/00965_logs_level_bugfix.reference"] | Change log level for a particular message | ```
2023.02.24 09:19:12.816280 [ 27971 ] {08b03316-667a-481e-9169-5d457f636371} <Information> executeQuery: Read 4546 rows, 5.51 MiB in 0.05874 sec., 77391.89649302009 rows/sec., 93.77 MiB/sec.
```
Information -> Debug | https://github.com/ClickHouse/ClickHouse/issues/46975 | https://github.com/ClickHouse/ClickHouse/pull/46997 | 4f048ce535e80c9999a7e05f6e08d5fbe4b3c50c | cc5b97b62404dc297ba33b807b0bc321fa1592b4 | "2023-02-27T16:45:10Z" | c++ | "2023-03-08T09:36:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,944 | ["docs/en/sql-reference/table-functions/index.md"] | The tabular on table function page is incomple | https://clickhouse.com/docs/en/sql-reference/table-functions/
It needs to be completed, or maybe we can just remove it? | https://github.com/ClickHouse/ClickHouse/issues/46944 | https://github.com/ClickHouse/ClickHouse/pull/47026 | fc619d06ba76fa20ea741c599aea597bcdbbaa2a | 476942ae833d52b24a471b03223779716ca48755 | "2023-02-27T10:52:50Z" | c++ | "2023-02-28T15:27:09Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,874 | ["tests/integration/test_concurrent_queries_restriction_by_query_kind/test.py"] | Flaky test test_concurrent_queries_restriction_by_query_kind | https://s3.amazonaws.com/clickhouse-test-reports/46517/fb35a6852ea55a693250449dcc5a07fb30ac4c2e/integration_tests__asan__[3/6].html
https://play.clickhouse.com/play?user=play#d2l0aCAKJyV0ZXN0X2NvbmN1cnJlbnRfcXVlcmllc19yZXN0cmljdGlvbl9ieV9xdWVyeV9raW5kJScgYXMgbmFtZV9wYXR0ZXJuLAonJSUnIGFzIGNvbnRleHRfcGF0dGVybiwKJzIwMjItMDktMDEnIGFzIHN0YXJ0X2RhdGUsCig0NTQ2MSkgYXMgbm9pc3lfcHJzLAooJ1N0YXRlbGVzcyB0ZXN0cyAoYXNhbiknLCAnU3RhdGVsZXNzIHRlc3RzIChhZGRyZXNzKScsICdTdGF0ZWxlc3MgdGVzdHMgKGFkZHJlc3MsIGFjdGlvbnMpJykgYXMgYmFja3BvcnRfYW5kX3JlbGVhc2Vfc3BlY2lmaWNfY2hlY2tzCnNlbGVjdCAKdG9TdGFydE9mRGF5KGNoZWNrX3N0YXJ0X3RpbWUpIGFzIGQsCmNvdW50KCksICBncm91cFVuaXFBcnJheShwdWxsX3JlcXVlc3RfbnVtYmVyKSwgYW55KHJlcG9ydF91cmwpCmZyb20gY2hlY2tzIHdoZXJlIHN0YXJ0X2RhdGUgPD0gY2hlY2tfc3RhcnRfdGltZSBhbmQgcHVsbF9yZXF1ZXN0X251bWJlciBub3QgaW4gCihzZWxlY3QgcHVsbF9yZXF1ZXN0X251bWJlciBhcyBwcm4gZnJvbSBjaGVja3Mgd2hlcmUgcHJuIT0wIGFuZCBzdGFydF9kYXRlIDw9IGNoZWNrX3N0YXJ0X3RpbWUgYW5kIGNoZWNrX25hbWUgaW4gYmFja3BvcnRfYW5kX3JlbGVhc2Vfc3BlY2lmaWNfY2hlY2tzKSAKYW5kIHRlc3RfbmFtZSBsaWtlIG5hbWVfcGF0dGVybiBhbmQgdGVzdF9jb250ZXh0X3JhdyBpbGlrZSBjb250ZXh0X3BhdHRlcm4KYW5kIHB1bGxfcmVxdWVzdF9udW1iZXIgbm90IGluIG5vaXN5X3BycwphbmQgdGVzdF9zdGF0dXMgaW4gKCdGQUlMJywgJ0ZMQUtZJywgJ2ZhaWx1cmUnKSBncm91cCBieSBkIG9yZGVyIGJ5IGQgZGVzYw== | https://github.com/ClickHouse/ClickHouse/issues/46874 | https://github.com/ClickHouse/ClickHouse/pull/46887 | 3a3a2f352c63b54de1b5fe082d57c1b6236e5500 | 936f57e7b2bca63539105e3d15db9e91fb9eab89 | "2023-02-25T18:45:40Z" | c++ | "2023-02-26T01:30:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,855 | ["src/Functions/FunctionsHashing.h", "src/Functions/IFunction.cpp", "src/Functions/map.cpp", "tests/queries/0_stateless/02673_map_hashing_msan.reference", "tests/queries/0_stateless/02673_map_hashing_msan.sql"] | MSan report in ColumnUnique (LowCardinality) | **How to reproduce**
```
milovidov-desktop :) SET allow_suspicious_low_cardinality_types = 1
SET allow_suspicious_low_cardinality_types = 1
Query id: 74086d0f-0d7c-4ef1-beee-dfa586bd712f
Ok.
0 rows in set. Elapsed: 0.003 sec.
milovidov-desktop :) CREATE TEMPORARY TABLE datetime__fuzz_14 (`d` LowCardinality(Nullable(UInt128)));
CREATE TEMPORARY TABLE datetime__fuzz_14
(
`d` LowCardinality(Nullable(UInt128))
)
Query id: 5d8fc3d3-39cf-44cb-ae40-3161fbaf0263
Ok.
0 rows in set. Elapsed: 0.004 sec.
milovidov-desktop :) SELECT max(mapPopulateSeries(mapPopulateSeries(map(toInt64(1048575), toInt64(9223372036854775806), 3, -2147483649))), toInt64(1048575), map('11', 257, '', NULL), cityHash64(*)) > NULL FROM (SELECT max(cityHash64(mapPopulateSeries(mapPopulateSeries(map(toInt64(1048575), toInt64(2147483646), 65537, -2147483649))), *)) > NULL, map(toInt64(-2147483647), toInt64(100.0001), -2147483647, NULL), mapPopulateSeries(map(toInt64(1024), toInt64(1048576), 1048575, -1)), map(toInt64(256), toInt64(NULL), -1, NULL), quantile(0.0001)(d) FROM datetime__fuzz_14 WITH TOTALS)
SELECT max(mapPopulateSeries(mapPopulateSeries(map(toInt64(1048575), toInt64(9223372036854775806), 3, -2147483649))), toInt64(1048575), map('11', 257, '', NULL), cityHash64(*)) > NULL
FROM
(
SELECT
max(cityHash64(mapPopulateSeries(mapPopulateSeries(map(toInt64(1048575), toInt64(2147483646), 65537, -2147483649))), *)) > NULL,
map(toInt64(-2147483647), toInt64(100.0001), -2147483647, NULL),
mapPopulateSeries(map(toInt64(1024), toInt64(1048576), 1048575, -1)),
map(toInt64(256), toInt64(NULL), -1, NULL),
quantile(0.0001)(d)
FROM datetime__fuzz_14
WITH TOTALS
)
```
| https://github.com/ClickHouse/ClickHouse/issues/46855 | https://github.com/ClickHouse/ClickHouse/pull/46856 | 960a0b658264d2ee43360b6417412fcc24e692ce | 9183933330b6d29454b98d24ded2b2f5d05714cd | "2023-02-25T06:03:46Z" | c++ | "2023-02-25T16:20:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,854 | ["utils/self-extracting-executable/decompressor.cpp"] | If `clickhouse` was run as self-extracting executable under sudo, keep the file uid/gid after extracting. | **Use case**
```
sudo ./clickhouse install
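# Hedged sketch of the desired behavior (not the actual decompressor code):
# sudo exposes the invoking user via the SUDO_UID/SUDO_GID environment
# variables, so after extracting itself the binary could restore ownership
# of the extracted file, e.g.:
#   [ -n "${SUDO_UID:-}" ] && chown "${SUDO_UID}:${SUDO_GID:-0}" ./clickhouse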
``` | https://github.com/ClickHouse/ClickHouse/issues/46854 | https://github.com/ClickHouse/ClickHouse/pull/47116 | a04b38db907b8e01760a45ab8cf7aa0acab45d5c | 65d671b7c72c7b1da23f831faa877565cf34f92c | "2023-02-25T02:29:09Z" | c++ | "2023-03-09T06:10:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,852 | ["programs/local/LocalServer.cpp", "programs/server/Server.cpp", "src/IO/ReadWriteBufferFromHTTP.h"] | "Too many open files" in `clickhouse-local` leads to long backoff while reading from `web` table | **Describe the unexpected behaviour**
```
clickhouse-local
CREATE DATABASE test;
USE test;
SET max_threads = 96;
SET send_logs_level = 'trace';
ATTACH TABLE github_events UUID '127f4241-4a9b-4ecd-8a84-846b88069cb5'
(
`file_time` DateTime,
`event_type` Enum8('CommitCommentEvent' = 1, 'CreateEvent' = 2, 'DeleteEvent' = 3, 'ForkEvent' = 4, 'GollumEvent' = 5, 'IssueCommentEvent' = 6, 'IssuesEvent' = 7, 'MemberEvent' = 8, 'PublicEvent' = 9, 'PullRequestEvent' = 10, 'PullRequestReviewCommentEvent' = 11, 'PushEvent' = 12, 'ReleaseEvent' = 13, 'SponsorshipEvent' = 14, 'WatchEvent' = 15, 'GistEvent' = 16, 'FollowEvent' = 17, 'DownloadEvent' = 18, 'PullRequestReviewEvent' = 19, 'ForkApplyEvent' = 20, 'Event' = 21, 'TeamAddEvent' = 22),
`actor_login` LowCardinality(String),
`repo_name` LowCardinality(String),
`created_at` DateTime,
`updated_at` DateTime,
`action` Enum8('none' = 0, 'created' = 1, 'added' = 2, 'edited' = 3, 'deleted' = 4, 'opened' = 5, 'closed' = 6, 'reopened' = 7, 'assigned' = 8, 'unassigned' = 9, 'labeled' = 10, 'unlabeled' = 11, 'review_requested' = 12, 'review_request_removed' = 13, 'synchronize' = 14, 'started' = 15, 'published' = 16, 'update' = 17, 'create' = 18, 'fork' = 19, 'merged' = 20),
`comment_id` UInt64,
`body` String,
`path` String,
`position` Int32,
`line` Int32,
`ref` LowCardinality(String),
`ref_type` Enum8('none' = 0, 'branch' = 1, 'tag' = 2, 'repository' = 3, 'unknown' = 4),
`creator_user_login` LowCardinality(String),
`number` UInt32,
`title` String,
`labels` Array(LowCardinality(String)),
`state` Enum8('none' = 0, 'open' = 1, 'closed' = 2),
`locked` UInt8,
`assignee` LowCardinality(String),
`assignees` Array(LowCardinality(String)),
`comments` UInt32,
`author_association` Enum8('NONE' = 0, 'CONTRIBUTOR' = 1, 'OWNER' = 2, 'COLLABORATOR' = 3, 'MEMBER' = 4, 'MANNEQUIN' = 5),
`closed_at` DateTime,
`merged_at` DateTime,
`merge_commit_sha` String,
`requested_reviewers` Array(LowCardinality(String)),
`requested_teams` Array(LowCardinality(String)),
`head_ref` LowCardinality(String),
`head_sha` String,
`base_ref` LowCardinality(String),
`base_sha` String,
`merged` UInt8,
`mergeable` UInt8,
`rebaseable` UInt8,
`mergeable_state` Enum8('unknown' = 0, 'dirty' = 1, 'clean' = 2, 'unstable' = 3, 'draft' = 4),
`merged_by` LowCardinality(String),
`review_comments` UInt32,
`maintainer_can_modify` UInt8,
`commits` UInt32,
`additions` UInt32,
`deletions` UInt32,
`changed_files` UInt32,
`diff_hunk` String,
`original_position` UInt32,
`commit_id` String,
`original_commit_id` String,
`push_size` UInt32,
`push_distinct_size` UInt32,
`member_login` LowCardinality(String),
`release_tag_name` String,
`release_name` String,
`review_state` Enum8('none' = 0, 'approved' = 1, 'changes_requested' = 2, 'commented' = 3, 'dismissed' = 4, 'pending' = 5)
)
ENGINE = MergeTree
ORDER BY (event_type, repo_name, created_at)
SETTINGS disk = disk(type = 'web', endpoint = 'http://clickhouse-public-datasets.s3.amazonaws.com/web/');
SELECT sum(cityHash64(*)) FROM github_events WHERE event_type = 'IssueCommentEvent';
```
https://pastila.nl/?002860cf/344ad6b36c29021778c4aa6ad7290515
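A hedged workaround sketch until this is fixed: `clickhouse-local` inherits the shell's open-file limit (RLIMIT_NOFILE), and a 96-thread scan of a `web` disk can exhaust it, which triggers the long retry backoff. Checking and raising the soft limit for the session before re-running avoids it (the limit value below is an assumed example, not a recommendation from the report):

```shell
# Inspect the current soft open-file limit of this shell session.
current_nofile_limit=$(ulimit -n)
echo "current soft open-file limit: ${current_nofile_limit}"
# To raise it for the session and then re-run the query:
# ulimit -n 1048576 && clickhouse-local
```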
| https://github.com/ClickHouse/ClickHouse/issues/46852 | https://github.com/ClickHouse/ClickHouse/pull/46853 | c4bf503690019227c9067d0e1343ea3038045728 | 1ca6156b07eaaea35dfece8eca7878ec22dfb60d | "2023-02-25T02:00:58Z" | c++ | "2023-02-25T19:57:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,816 | ["src/IO/parseDateTimeBestEffort.cpp", "tests/queries/0_stateless/01442_date_time_with_params.reference", "tests/queries/0_stateless/01442_date_time_with_params.sql"] | Dec 15, 2021 support for parseDateTimeBestEffort function | **Use case**
```
SELECT parseDateTimeBestEffort('Dec 15, 2021')
Query id: b849327e-3893-4a97-9a6a-da59b61f4a7a
0 rows in set. Elapsed: 0.002 sec.
Received exception from server (version 22.13.1):
Code: 6. DB::Exception: Received from localhost:9000. DB::Exception: Cannot parse string 'Dec 15, 2021' as DateTime: syntax error at position 6 (parsed just 'Dec 15'): While processing parseDateTimeBestEffort('Dec 15, 2021'). (CANNOT_PARSE_TEXT)
SELECT parseDateTimeBestEffort('Dec 15 2021')
Query id: 988b6d95-d22c-4882-a835-07dc8261a7bb
┌─parseDateTimeBestEffort('Dec 15 2021')─┐
│                    2021-12-15 00:00:00 │
└────────────────────────────────────────┘
1 row in set. Elapsed: 0.001 sec.
```
| https://github.com/ClickHouse/ClickHouse/issues/46816 | https://github.com/ClickHouse/ClickHouse/pull/47071 | ce8e49a9a0117d3906fc6d32bd23fb3bf288a4b5 | 16eedb004f58a3c8507cf605f8e563df84dcf1e5 | "2023-02-24T10:32:20Z" | c++ | "2023-03-02T11:00:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,741 | ["src/Interpreters/InterpreterSelectQuery.cpp", "src/Interpreters/InterpreterSelectWithUnionQuery.cpp", "src/Interpreters/JoinedTables.cpp", "src/Interpreters/JoinedTables.h", "src/Interpreters/getTableExpressions.cpp", "src/Interpreters/getTableExpressions.h", "tests/queries/0_stateless/02428_parameterized_view.reference", "tests/queries/0_stateless/02428_parameterized_view.sh"] | Parametrized view - what is the syntax to use in IN clause? Is it possible to use parameter in subquery? | It's not clear from the [documentation](https://clickhouse.com/docs/en/sql-reference/statements/create/view/) or [Altinity article](https://kb.altinity.com/altinity-kb-queries-and-syntax/altinity-kb-parameterized-views/) how to use parametrized views.
Parametrized view works fine with the equality clause:
```
create or replace view live_agents_pv2 as
select * FROM default.service_library sl
where account_id = {account_id2:Int32}
```
Fails with error UNKNOWN_QUERY_PARAMETER with IN clause:
```
create or replace view live_agents_pv2 as
select * FROM default.service_library sl
where account_id in ({account_id2:Array(Int32)})
```
It also fails when I use a parameter in a nested query:
```
create or replace view live_agents_pv2 as
select * from (
select * FROM default.service_library sl
where account_id = {account_id2:Int32} )
```
version 23.1.3.5
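For context: when a parameterized view does get created successfully, it is queried table-function style, with parameters passed by name (a sketch using the view and parameter names from above; the value `42` is arbitrary):

```sql
SELECT * FROM live_agents_pv2(account_id2 = 42);
```

The failures above happen at `CREATE` time, though, so this invocation syntax does not help with the `IN` clause or subquery cases.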
DDL:
```
CREATE TABLE default.service_library
(
account_id Int32,
checksum String,
cluster_agent_id Int32,
entity_guid String,
language String,
name String,
real_agent_id Int32,
version String,
created_at DateTime DEFAULT now(),
run_id Int32,
is_dead UInt8 DEFAULT 0
)
ENGINE = ReplacingMergeTree(created_at)
ORDER BY (account_id, entity_guid, name, version, cluster_agent_id, real_agent_id)
TTL created_at + toIntervalDay(3)
``` | https://github.com/ClickHouse/ClickHouse/issues/46741 | https://github.com/ClickHouse/ClickHouse/pull/47725 | 069cac8d6ea186e3fae8a9172cc9f4d0da9c4edf | 464b166e91fcd0d02007b1c8b8b01a7c3b58e9b2 | "2023-02-22T16:32:19Z" | c++ | "2023-03-22T08:05:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,733 | ["src/Compression/CompressionCodecT64.cpp", "tests/queries/0_stateless/25338_ipv4_codec_t64.reference", "tests/queries/0_stateless/25338_ipv4_codec_t64.sql"] | IPv4 Native Type Issues | The introduction of native IPv4 in release 23.1 (https://github.com/ClickHouse/ClickHouse/pull/43221) caused some backward compatability issues with:
- T64 Codec
- Bloom Filter for secondary (skip) index
To reproduce (https://fiddle.clickhouse.com/bf7da0b7-ec44-4bd1-978b-487dae5a62fd):
```
CREATE TABLE users (
uid Int16,
name String,
ip IPv4 CODEC(T64, ZSTD(1)),
INDEX ip_idx ip TYPE bloom_filter GRANULARITY 4)
ENGINE=MergeTree
ORDER BY uid;
INSERT INTO users VALUES (1231, 'John', '1.1.1.1');
INSERT INTO users VALUES (6666, 'Ksenia', '2.2.2.2');
INSERT INTO users VALUES (8888, 'Alice', '3.3.3.3');
SELECT * FROM users;
```
Will result in the following errors respectively:
```
Received exception from server (version 23.1.3):
Code: 431. DB::Exception: Received from localhost:9000. DB::Exception: T64 codec is not supported for specified type IPv4. (ILLEGAL_SYNTAX_FOR_CODEC_TYPE)
```
```
Received exception from server (version 23.1.3):
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Unexpected type IPv4 of bloom filter index.. (ILLEGAL_COLUMN)
```
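A hedged workaround sketch until the native `IPv4` type regains codec and index support: keep the column as `UInt32` (the pre-23.1 representation) and convert at the edges with the long-standing `IPv4StringToNum`/`IPv4NumToString` functions. The table name `users_u32` is made up for the example:

```sql
CREATE TABLE users_u32 (
    uid Int16,
    name String,
    ip UInt32 CODEC(T64, ZSTD(1)),
    INDEX ip_idx ip TYPE bloom_filter GRANULARITY 4)
ENGINE = MergeTree
ORDER BY uid;

INSERT INTO users_u32 VALUES (1231, 'John', IPv4StringToNum('1.1.1.1'));
SELECT uid, name, IPv4NumToString(ip) AS ip FROM users_u32;
```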
| https://github.com/ClickHouse/ClickHouse/issues/46733 | https://github.com/ClickHouse/ClickHouse/pull/46747 | b8b6d597aec37bbc2ec1d052516f8a4cd16c0bd4 | 71afa8d1879b1c01f3fd9599f7a37f084ddfb08d | "2023-02-22T15:40:27Z" | c++ | "2023-02-23T13:34:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,726 | ["src/Interpreters/HashJoin.cpp"] | table with Join engine became 10x slower to load in 23.1 compared to 22.12 | Join tables was loading in ~10minutes in version 22.12 now takes 1h30 in 23.1, same dataset.
The table is has engine `Join(ANY, LEFT, single_field_pk)` with setting `join_use_nulls=1`
`
2023.02.22 12:51:42.631560 [ 1997740 ] {} <Information> StorageSetOrJoinBase: Loaded from backup file store/a2b/a2b1850a-e450-40ee-8433-c9df42075ced/1.bin. 1004400840 rows, 86.06 GiB. State has 1004400840 unique rows.`
```sudo du -sh /disk1/clickhouse/store/a2b/a2b1850a-e450-40ee-8433-c9df42075ced/1.bin
19G /disk1/clickhouse/store/a2b/a2b1850a-e450-40ee-8433-c9df42075ced/1.bin
```
Here is a perf profile of the loading thread, which is clearly cpu bound :
```
65.17% ThreadPool clickhouse [.] DB::ColumnNullable::allocatedBytes
24.04% ThreadPool clickhouse [.] DB::HashJoin::getTotalByteCount
 5.28% ThreadPool clickhouse [.] DB::ColumnVector<float>::allocatedBytes
 2.96% ThreadPool clickhouse [.] DB::ColumnVector<unsigned int>::allocatedBytes
 1.44% ThreadPool clickhouse [.] DB::(anonymous namespace)::insertFromBlockImplTypeCase<(DB::JoinStrictn
 0.19% ThreadPool clickhouse [.] LZ4::(anonymous namespace)::decompressImpl<16ul, true>
 0.04% ThreadPool clickhouse [.] memcpy
```
which sounds quite suspicious.
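One plausible reading of that profile (an assumption on my side, not taken from the code): if the total byte count of the join state is recomputed from scratch after every inserted block, the load becomes quadratic in the number of blocks:

```
// pseudocode sketch of the suspected pattern
for each block read from 1.bin:              // ~N blocks in the backup file
    hash_join.insert(block)
    total = hash_join.getTotalByteCount()    // walks every column stored so far,
                                             // mostly via IColumn::allocatedBytes()
// => O(N^2) allocatedBytes() calls over the whole load
```

That would match `allocatedBytes()` dominating the profile while the actual insertion work stays under 2%.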
```
CREATE TABLE table
(
`pk` UInt64,
`metrics_0` UInt32 CODEC(ZSTD(1)),
`metrics_1` UInt32 CODEC(ZSTD(1)),
`metrics_2` UInt32 CODEC(ZSTD(1)),
`metrics_3` Float32 CODEC(ZSTD(1)),
`metrics_4` Float32 CODEC(ZSTD(1)),
`metrics_5` Float32 CODEC(ZSTD(1)),
`metrics_6` Float32 CODEC(ZSTD(1)),
`metrics_7` UInt32 CODEC(ZSTD(1)),
`metrics_8` UInt32 CODEC(ZSTD(1)),
`metrics_9` UInt32 CODEC(ZSTD(1)),
`metrics_10` Float32 CODEC(ZSTD(1)),
`metrics_11` Float32 CODEC(ZSTD(1)),
`metrics_12` Float32 CODEC(ZSTD(1)),
`metrics_13` Float32 CODEC(ZSTD(1)),
`metrics_14` UInt32 CODEC(ZSTD(1)),
`metrics_15` UInt32 CODEC(ZSTD(1)),
`metrics_16` UInt32 CODEC(ZSTD(1)),
`metrics_17` Float32 CODEC(ZSTD(1)),
`metrics_18` Float32 CODEC(ZSTD(1)),
`metrics_19` Float32 CODEC(ZSTD(1)),
`metrics_20` Float32 CODEC(ZSTD(1))
)
ENGINE = Join(ANY, LEFT, pk)
SETTINGS join_use_nulls = 1
``` | https://github.com/ClickHouse/ClickHouse/issues/46726 | https://github.com/ClickHouse/ClickHouse/pull/47647 | a72f0cae072ea6385259804c7a946688511a30e1 | ff56d49ce244ab5983786e793877c403a7aa346b | "2023-02-22T13:12:03Z" | c++ | "2023-03-16T18:39:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,704 | ["src/Interpreters/inplaceBlockConversions.cpp", "tests/queries/0_stateless/02680_lc_null_as_default.reference", "tests/queries/0_stateless/02680_lc_null_as_default.sql"] | Logical error: block structure mismatch when inserting into Memory table with DEFAULT | ```
milovidov@milovidov-desktop:~/Downloads$ clickhouse-local
ClickHouse local version 23.2.1.1.
milovidov-desktop :) CREATE TABLE test_null_as_default__fuzz_46 (`a` Nullable(DateTime64(3)), `b` LowCardinality(Float32) DEFAULT a + 1000) ENGINE = Memory;
CREATE TABLE test_null_as_default__fuzz_46
(
`a` Nullable(DateTime64(3)),
`b` LowCardinality(Float32) DEFAULT a + 1000
)
ENGINE = Memory
Query id: b981a570-a090-4790-835c-f611346d90c8
0 rows in set. Elapsed: 0.161 sec.
Received exception:
Code: 455. DB::Exception: Creating columns of type LowCardinality(Float32) is prohibited by default due to expected negative impact on performance. It can be enabled with the "allow_suspicious_low_cardinality_types" setting. (SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY)
milovidov-desktop :) SET allow_suspicious_low_cardinality_types = 1
SET allow_suspicious_low_cardinality_types = 1
Query id: 67ec54e7-b771-43a8-9057-954f8a05f7a1
Ok.
0 rows in set. Elapsed: 0.000 sec.
milovidov-desktop :) CREATE TABLE test_null_as_default__fuzz_46 (`a` Nullable(DateTime64(3)), `b` LowCardinality(Float32) DEFAULT a + 1000) ENGINE = Memory;
CREATE TABLE test_null_as_default__fuzz_46
(
`a` Nullable(DateTime64(3)),
`b` LowCardinality(Float32) DEFAULT a + 1000
)
ENGINE = Memory
Query id: 90e6e6fe-6780-4229-9d9f-a3757fbf5aef
Ok.
0 rows in set. Elapsed: 0.000 sec.
milovidov-desktop :) INSERT INTO test_null_as_default__fuzz_46 SELECT 1, NULL UNION ALL SELECT 2, NULL;
INSERT INTO test_null_as_default__fuzz_46 SELECT
1,
NULL
UNION ALL
SELECT
2,
NULL
Query id: d3c73a37-d3a4-4cc3-9c5e-07600198acef
0 rows in set. Elapsed: 0.017 sec.
Received exception:
Code: 49. DB::Exception: Block structure mismatch in function connect between ConvertingTransform and MemorySink stream: different types:
b Float32 Float32(size = 0)
b LowCardinality(Float32) ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, Float32(size = 1))). (LOGICAL_ERROR)
milovidov-desktop :) INSERT INTO test_null_as_default__fuzz_46 SELECT 1, NULL;
INSERT INTO test_null_as_default__fuzz_46 SELECT
1,
NULL
Query id: 4b594185-5fb5-4cbb-a48c-f42c7ab38e99
0 rows in set. Elapsed: 0.001 sec.
Received exception:
Code: 49. DB::Exception: Block structure mismatch in function connect between ConvertingTransform and MemorySink stream: different types:
b Float32 Float32(size = 0)
b LowCardinality(Float32) ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, Float32(size = 1))). (LOGICAL_ERROR)
``` | https://github.com/ClickHouse/ClickHouse/issues/46704 | https://github.com/ClickHouse/ClickHouse/pull/47537 | d3a514d22175c3f756a969d99523b7472011d04a | be0ae6a23827e11d3169871d5cca1f5ebe45195f | "2023-02-22T02:01:03Z" | c++ | "2023-03-14T10:45:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,694 | ["src/Analyzer/Passes/AggregateFunctionsArithmericOperationsPass.cpp", "src/Interpreters/ArithmeticOperationsInAgrFuncOptimize.cpp", "tests/queries/0_stateless/01271_optimize_arithmetic_operations_in_aggr_func_long.reference", "tests/queries/0_stateless/02498_analyzer_aggregate_functions_arithmetic_operations_pass_fix.reference", "tests/queries/0_stateless/02498_analyzer_aggregate_functions_arithmetic_operations_pass_fix.sql"] | optimize_arithmetic_operations_in_aggregate_functions cannot be used with max / min ... | ```sql
select max(100-c1), min(100-c1) from values ((0),(100))
settings optimize_arithmetic_operations_in_aggregate_functions=0;
┌─max(minus(100, c1))─┬─min(minus(100, c1))─┐
│                 100 │                   0 │ OK
└─────────────────────┴─────────────────────┘
select max(100-c1), min(100-c1) from values ((0),(100))
settings optimize_arithmetic_operations_in_aggregate_functions=1;
┌─minus(100, max(c1))─┬─minus(100, min(c1))─┐
│                   0 │                 100 │ Not OK
└─────────────────────┴─────────────────────┘
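-- The correct rewrite has to swap min/max when the aggregate argument is
-- negated by the subtraction: max(100 - c1) = 100 - min(c1) and
-- min(100 - c1) = 100 - max(c1). For example:
select 100 - min(c1) as expected_max, 100 - max(c1) as expected_min
from values ((0),(100));
-- expected_max = 100, expected_min = 0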
``` | https://github.com/ClickHouse/ClickHouse/issues/46694 | https://github.com/ClickHouse/ClickHouse/pull/46705 | 4c7ac6cd899417478ddfebc90eebd0ce42dfcbc0 | e800557743ba619f14eded421a7f295ad3b68a6f | "2023-02-21T22:38:12Z" | c++ | "2023-03-01T05:49:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,614 | ["tests/queries/0_stateless/01710_projection_optimize_materialize.sql"] | Flaky test 01710_projection_optimize_materialize | https://s3.amazonaws.com/clickhouse-test-reports/46583/e5e102763710b30f739055b9120b452e1679c6d7/stateless_tests__debug__s3_storage__[3/6].html
https://play.clickhouse.com/play?user=play#c2VsZWN0IAp0b1N0YXJ0T2ZIb3VyKGNoZWNrX3N0YXJ0X3RpbWUpIGFzIGQsCmNvdW50KCksICBncm91cFVuaXFBcnJheShwdWxsX3JlcXVlc3RfbnVtYmVyKSwgIGFueShyZXBvcnRfdXJsKQpmcm9tIGNoZWNrcyB3aGVyZSAnMjAyMi0wNi0wMScgPD0gY2hlY2tfc3RhcnRfdGltZSBhbmQgdGVzdF9uYW1lIGxpa2UgJyUwMTcxMF9wcm9qZWN0aW9uX29wdGltaXplX21hdGVyaWFsaXplJScgYW5kIHRlc3Rfc3RhdHVzIGluICgnRkFJTCcsICdGTEFLWScpIGdyb3VwIGJ5IGQgb3JkZXIgYnkgZCBkZXNj | https://github.com/ClickHouse/ClickHouse/issues/46614 | https://github.com/ClickHouse/ClickHouse/pull/48276 | 599d67c4eea6e5d11f3ada0859733e5240d747ca | 2933c6b9be6a6313f8224ec7c90e4508219199b7 | "2023-02-20T17:24:36Z" | c++ | "2023-04-03T03:24:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,557 | ["src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp", "src/Storages/MergeTree/MergeTreeBaseSelectProcessor.h", "src/Storages/MergeTree/MergeTreeSource.cpp", "src/Storages/MergeTree/MergeTreeSource.h", "tests/queries/0_stateless/02666_progress_when_no_rows_from_prewhere.reference", "tests/queries/0_stateless/02666_progress_when_no_rows_from_prewhere.sh"] | Zero CPU usage and no progress displayed when no blocks are returned if optimization of PREWHERE is performed. | ```
SELECT user_screen_name, text FROM twitter WHERE text LIKE '%fg;jkglmsdn874fdskjlsfdghn%'
```
| https://github.com/ClickHouse/ClickHouse/issues/46557 | https://github.com/ClickHouse/ClickHouse/pull/46611 | 49330b373c3315008016d444921b5c4bb394b305 | b61bb56a5abe0ab02fe15c893471a686182756e6 | "2023-02-19T00:09:20Z" | c++ | "2023-02-21T20:27:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,465 | ["src/Storages/MergeTree/DataPartsExchange.cpp", "tests/integration/test_zero_copy_fetch/__init__.py", "tests/integration/test_zero_copy_fetch/configs/storage_conf.xml", "tests/integration/test_zero_copy_fetch/test.py"] | New replica download parts to default disk instead of external disk | **Describe what's wrong**
Newly added replica downloads all data to default disk regardless of its location on other replicas.
**Does it reproduce on recent release?**
Experienced in our production on version 22.8.11.15
**How to reproduce**
Consider two replicas with the following config for external storage:
```
<yandex>
<merge_tree>
<allow_remote_fs_zero_copy_replication>true</allow_remote_fs_zero_copy_replication>
</merge_tree>
<storage_configuration>
<disks>
<s3_debug>
<type>s3</type>
<endpoint>https://s3.some-host/test/debug/001/</endpoint>
<access_key_id>xxxxxxxxxxxxxxxxxxxx</access_key_id>
<secret_access_key>xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx</secret_access_key>
</s3_debug>
</disks>
<policies>
<s3_debug>
<volumes>
<default>
<disk>default</disk>
</default>
<external>
<disk>s3_debug</disk>
<prefer_not_to_merge>False</prefer_not_to_merge>
<perform_ttl_move_on_insert>True</perform_ttl_move_on_insert>
</external>
</volumes>
</s3_debug>
</policies>
</storage_configuration>
</yandex>
```
Create a test table on one replica and insert some data:
```
CREATE DATABASE debug;
CREATE TABLE debug.test1 (EventDate Date, CounterID UInt32)
ENGINE = ReplicatedMergeTree('/clickhouse-tables/{shard}/test1', '{replica}')
PARTITION BY toMonday(EventDate)
ORDER BY (CounterID, EventDate)
SAMPLE BY intHash32(CounterID)
SETTINGS index_granularity = 8192, storage_policy = 's3_debug'
INSERT INTO debug.test1 SELECT toDate('2023-01-01') + toIntervalDay(number), number + 1000 from system.numbers limit 20;
```
Look at partitions:
```
SELECT
database,
table,
partition,
disk_name,
count(*) AS parts,
sum(rows) AS total_rows,
sum(bytes_on_disk) AS total_bytes_on_disk,
min(min_date) AS min_date_in_partition,
max(max_date) AS max_date_in_partition
FROM system.parts
WHERE (database = 'debug') AND (table = 'test1') AND (active = 1)
GROUP BY
database,
table,
partition,
disk_name
ORDER BY partition ASC
┌─database─┬─table─┬─partition──┬─disk_name─┬─parts─┬─total_rows─┬─total_bytes_on_disk─┬─min_date_in_partition─┬─max_date_in_partition─┐
│ debug    │ test1 │ 2022-12-26 │ default   │     1 │          1 │                 173 │            2023-01-01 │            2023-01-01 │
│ debug    │ test1 │ 2023-01-02 │ default   │     1 │          7 │                 209 │            2023-01-02 │            2023-01-08 │
│ debug    │ test1 │ 2023-01-09 │ default   │     1 │          7 │                 209 │            2023-01-09 │            2023-01-15 │
│ debug    │ test1 │ 2023-01-16 │ default   │     1 │          5 │                 197 │            2023-01-16 │            2023-01-20 │
└──────────┴───────┴────────────┴───────────┴───────┴────────────┴─────────────────────┴───────────────────────┴───────────────────────┘
```
All data is local.
Move partitions to external storage:
```
ALTER TABLE debug.test1 MOVE PARTITION '2022-12-26' TO DISK 's3_debug';
ALTER TABLE debug.test1 MOVE PARTITION '2023-01-02' TO DISK 's3_debug';
ALTER TABLE debug.test1 MOVE PARTITION '2023-01-09' TO DISK 's3_debug';
```
And look again at the partitions:
```
┌─database─┬─table─┬─partition──┬─disk_name─┬─parts─┬─total_rows─┬─total_bytes_on_disk─┬─min_date_in_partition─┬─max_date_in_partition─┐
│ debug    │ test1 │ 2022-12-26 │ s3_debug  │     1 │          1 │                 173 │            2023-01-01 │            2023-01-01 │
│ debug    │ test1 │ 2023-01-02 │ s3_debug  │     1 │          7 │                 209 │            2023-01-02 │            2023-01-08 │
│ debug    │ test1 │ 2023-01-09 │ s3_debug  │     1 │          7 │                 209 │            2023-01-09 │            2023-01-15 │
│ debug    │ test1 │ 2023-01-16 │ default   │     1 │          5 │                 197 │            2023-01-16 │            2023-01-20 │
└──────────┴───────┴────────────┴───────────┴───────┴────────────┴─────────────────────┴───────────────────────┴───────────────────────┘
```
Some data on external storage.
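Manual `MOVE PARTITION` is only one way to place data on the external disk. A hedged alternative is a move TTL in the table definition, which each replica evaluates locally, so every replica moves its own copy of the data (interval chosen here purely for illustration):

```sql
-- Move data older than two weeks to the external disk on every replica.
ALTER TABLE debug.test1
    MODIFY TTL EventDate + INTERVAL 2 WEEK TO DISK 's3_debug';
```

This does not change the core problem described below, since a freshly created replica still fetches parts to the default disk first, but the TTL would eventually relocate them without manual intervention.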
Then switch to the second replica and create the table:
```
CREATE DATABASE debug;
CREATE TABLE debug.test1 (EventDate Date, CounterID UInt32)
ENGINE = ReplicatedMergeTree('/clickhouse-tables/{shard}/test1', '{replica}')
PARTITION BY toMonday(EventDate)
ORDER BY (CounterID, EventDate)
SAMPLE BY intHash32(CounterID)
SETTINGS index_granularity = 8192, storage_policy = 's3_debug';
```
And look at the partitions on the newly created replica:
```
ββdatabaseββ¬βtableββ¬βpartitionβββ¬βdisk_nameββ¬βpartsββ¬βtotal_rowsββ¬βtotal_bytes_on_diskββ¬βmin_date_in_partitionββ¬βmax_date_in_partitionββ
β debug β test1 β 2022-12-26 β default β 1 β 1 β 173 β 2023-01-01 β 2023-01-01 β
β debug β test1 β 2023-01-02 β default β 1 β 7 β 209 β 2023-01-02 β 2023-01-08 β
β debug β test1 β 2023-01-09 β default β 1 β 7 β 209 β 2023-01-09 β 2023-01-15 β
β debug β test1 β 2023-01-16 β default β 1 β 5 β 197 β 2023-01-16 β 2023-01-20 β
ββββββββββββ΄ββββββββ΄βββββββββββββ΄ββββββββββββ΄ββββββββ΄βββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββββ΄ββββββββββββββββββββββββ
```
The data is here, but on the default disk instead of the external storage.
**Expected behavior**
The data on the newly created replica should be on the external storage, as on the first replica.
**Additional context**
There will be some complications because the replicas may have different `storage_policy` settings, and the storage policies themselves may differ.
But this is a blocker for the replica-redeployment scenario, where we need to reinstall the operating system from scratch and recreate the replicated tables after a hardware failure. Redeployment becomes impossible if the total amount of data exceeds the local disk size.
Probably `CREATE TABLE` should fail when the `storage_policy` settings differ or when the storage policies themselves differ between replicas.
And it gets even more complicated with more than two replicas.
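As a workaround until fetched parts respect the source disk, the moves can simply be repeated on the new replica once it has caught up. A sketch using only standard commands (partition values taken from the outputs above):

```sql
-- Wait until the new replica has fetched everything from the first one.
SYSTEM SYNC REPLICA debug.test1;

-- Then re-apply the moves locally so both replicas end up on the same disks.
ALTER TABLE debug.test1 MOVE PARTITION '2022-12-26' TO DISK 's3_debug';
ALTER TABLE debug.test1 MOVE PARTITION '2023-01-02' TO DISK 's3_debug';
ALTER TABLE debug.test1 MOVE PARTITION '2023-01-09' TO DISK 's3_debug';
```

This requires enough temporary space on the default disk to hold the fetched parts, which is exactly why it does not help when the data set is larger than the local disk.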
Issue: https://github.com/ClickHouse/ClickHouse/issues/46465 (reported 2023-02-16) · Fixed by: https://github.com/ClickHouse/ClickHouse/pull/47010 (merged 2023-03-01)