status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,140 | ["docs/en/sql-reference/aggregate-functions/reference/any.md", "src/AggregateFunctions/AggregateFunctionAny.cpp", "tests/queries/0_stateless/02813_any_value.reference", "tests/queries/0_stateless/02813_any_value.sql"] | Add `any_value` as a compatibility alias for `any` | **Use case**
Compatibility with BigQuery.
**Describe the solution you'd like**
Simply add this alias. It should be case-insensitive. | https://github.com/ClickHouse/ClickHouse/issues/52140 | https://github.com/ClickHouse/ClickHouse/pull/52147 | c960b3b533b748027f6be5e27022b84143b43058 | 8964de808ea16f141a25cfedb6c41713fd395d5c | "2023-07-15T18:13:49Z" | c++ | "2023-07-25T11:05:49Z" |
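For illustration only, the semantics the alias would share with `any` can be sketched in a few lines of Python (`any` in ClickHouse returns an arbitrary value from the group; this toy version just takes the first — the alias itself would not change behavior, only register a case-insensitive name):

```python
# Toy sketch of the semantics `any_value` would share with `any`:
# return some (here: the first) value from the group.
def any_value(values):
    return values[0] if values else None

print(any_value([10, 20, 30]))  # 10
```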
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,139 | ["docs/en/sql-reference/aggregate-functions/reference/arrayconcatagg.md", "src/AggregateFunctions/AggregateFunctionFactory.cpp", "src/AggregateFunctions/AggregateFunctionGroupArray.cpp", "src/Common/IFactoryWithAliases.h", "tests/queries/0_stateless/02813_array_concat_agg.reference", "tests/queries/0_stateless/02813_array_concat_agg.sql", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | Add `array_concat_agg` for compatibility with BigQuery | It should be an alias to `groupArrayArray` aggregate function in ClickHouse.
The task is slightly more complex than https://github.com/ClickHouse/ClickHouse/issues/52100 because the `groupArrayArray` is a combination of `groupArray` and the `Array` combinator. | https://github.com/ClickHouse/ClickHouse/issues/52139 | https://github.com/ClickHouse/ClickHouse/pull/52149 | 46e03432a928981fc3c5924cae8e6184fea86c27 | f4e095b50237c891ade2f8dba332963419cde91d | "2023-07-15T18:12:34Z" | c++ | "2023-07-18T00:03:27Z" |
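The aggregation the proposed alias maps to has simple semantics: concatenate every array value in a group into one flat array. A minimal Python sketch of those semantics (an illustration, not the ClickHouse implementation):

```python
from itertools import chain

# Semantics of groupArrayArray (and thus the proposed array_concat_agg):
# concatenate every array in the group into a single flat array.
rows = [[1, 2], [3], [4, 5]]
result = list(chain.from_iterable(rows))
print(result)  # [1, 2, 3, 4, 5]
```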
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,107 | ["src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp", "src/Storages/MergeTree/MergeTreeDataSelectExecutor.h", "tests/queries/0_stateless/01786_explain_merge_tree.reference", "tests/queries/0_stateless/02354_annoy_index.reference", "tests/queries/0_stateless/02354_usearch_index.reference", "tests/queries/0_stateless/02866_size_of_marks_skip_idx_explain.reference", "tests/queries/0_stateless/02866_size_of_marks_skip_idx_explain.sql"] | why set type index should be invalid, but the explain shows Selected Granules is less than Initial Granules | I have a large table with a column of non-repeating values and a set-type index on that column, with max_rows set to 100 (index_granularity = 8192 and the GRANULARITY of the index is 16, so one index granule covers 131072 rows, which is larger than the max_rows of 100; according to the official documentation, the index should therefore be invalid).
But when I execute EXPLAIN with indexes = 1, it shows that after applying this index the number of Selected Granules is less than Initial Granules, i.e. the index is effective, so I'm confused about this.
The EXPLAIN info for this index:
"Type": "Skip",
"Name": "a",
"Description": "set GRANULARITY 16",
"Initial Parts": 8,
"Selected Parts": 8,
"Initial Granules": 41323,
"Selected Granules": 3201 | https://github.com/ClickHouse/ClickHouse/issues/52107 | https://github.com/ClickHouse/ClickHouse/pull/53616 | a190efed837b55f4263da043fb502c63737b5873 | 45d924c62c4390aa31db3d443a0e353ef692642b | "2023-07-14T06:16:00Z" | c++ | "2023-08-24T14:46:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,100 | ["docs/en/sql-reference/aggregate-functions/reference/grouparray.md", "src/AggregateFunctions/AggregateFunctionGroupArray.cpp", "tests/queries/0_stateless/02813_array_agg.reference", "tests/queries/0_stateless/02813_array_agg.sql"] | A compatibility alias `array_agg` for PostgreSQL, equivalent to the `groupArray` aggregate function. | **Use case**
Someone sends a PostgreSQL query to ClickHouse, forgetting that it is ClickHouse.
**Describe the solution you'd like**
It would be nice to introduce this alias.
**Additional context**
This task is trivial. There is a chance the behavior is different in regard to aggregation of NULL values, but it is ok for the scope of this task. | https://github.com/ClickHouse/ClickHouse/issues/52100 | https://github.com/ClickHouse/ClickHouse/pull/52135 | 3b2b7d75c92fc8a49e52e3aebc9c574fb88eabde | 9e7361a0f627bcacdf98de145f0fc80ea1883a9c | "2023-07-14T01:03:27Z" | c++ | "2023-07-15T19:14:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,066 | ["tests/queries/0_stateless/02813_system_licenses_base.reference", "tests/queries/0_stateless/02813_system_licenses_base.sql", "utils/list-licenses/list-licenses.sh"] | Credits to poco disappeared from system.licenses | ```
SELECT
library_name,
license_type
FROM system.licenses
WHERE library_name ILIKE '%poco%'
```
now is empty.
Happened after https://github.com/ClickHouse/ClickHouse/pull/46075
We need to put the poco license back, because it's fair and mandatory according to their license.
"The copyright notices in the Software and this entire statement, including the above license grant, this restriction and the following disclaimer, must be included in all copies of the Software" | https://github.com/ClickHouse/ClickHouse/issues/52066 | https://github.com/ClickHouse/ClickHouse/pull/52127 | 758ec9fa9264d80d672a9f6fe0600c89a2da3aa2 | 89d646cef0e7dc3fa9e97ac32fbb61b70fc502b7 | "2023-07-12T15:08:43Z" | c++ | "2023-07-22T17:54:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,057 | ["src/Functions/FunctionsComparison.h"] | ClickHouse Server 23.7.1.857 terminated by Null Pointer Access after running a SQL statement | **Describe the bug**
The ClickHouse server was terminated by a null pointer access after running a SQL statement.
**How to reproduce**
The SQL statement to reproduce:
```sql
SELECT
x
FROM
(
SELECT
(
WITH x AS
(
SELECT 1 AS x
)
SELECT 1 NOT IN (
SELECT 1
) AS x
FROM x
INNER JOIN x USING (x)
) AS x
FROM
(
SELECT -1 AS x
)
) AS x
WHERE (* NOT IN (1)) = 1
```
It can be reproduced on the official docker image. (`clickhouse/clickhouse-server:head` and `clickhouse/clickhouse-server:latest`).
The log:
```
[a75f227bd0b9] 2023.07.12 09:38:23.474449 [ 404 ] <Fatal> BaseDaemon: ########################################
[a75f227bd0b9] 2023.07.12 09:38:23.474510 [ 404 ] <Fatal> BaseDaemon: (version 23.7.1.857 (official build), build id: EBF7F00D257A03E61ABF10B376C4A7BDD0EED2BA, git hash: 9aef39a7888e10995ea85faad559a110c0e22a82) (from thread 48) (query_id: 934c546b-51d2-4538-b58d-49007aeea07c) (query: SELECT
x
FROM
(
SELECT
(
WITH x AS
(
SELECT 1 AS x
)
SELECT 1 NOT IN (
SELECT 1
) AS x
FROM x
INNER JOIN x USING (x)
) AS x
FROM
(
SELECT -1 AS x
)
) AS x
WHERE (* NOT IN (1)) = 1) Received signal Segmentation fault (11)
[a75f227bd0b9] 2023.07.12 09:38:23.479126 [ 404 ] <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
[a75f227bd0b9] 2023.07.12 09:38:23.479164 [ 404 ] <Fatal> BaseDaemon: Stack trace: 0x0000000013d197b5 0x000000000a082d64 0x000000000a06a4f7 0x0000000008857fea 0x000000000885758e 0x00000000127633ef 0x0000000012763e62 0x0000000012765159 0x0000000012eec78c 0x0000000014c7c844 0x0000000014ddcf12 0x000000001381ae40 0x000000001380bdf3 0x0000000013809734 0x00000000138a7bf6 0x00000000138a8b04 0x0000000013bd8153 0x0000000013bd422e 0x00000000149f8284 0x0000000014a0f759 0x000000001798e714 0x000000001798f931 0x0000000017b11727 0x0000000017b0f15c 0x00007f8cb5a5a609 0x00007f8cb597f133
[a75f227bd0b9] 2023.07.12 09:38:23.479268 [ 404 ] <Fatal> BaseDaemon: 2. DB::ColumnNullable::compareAt(unsigned long, unsigned long, DB::IColumn const&, int) const @ 0x0000000013d197b5 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.479336 [ 404 ] <Fatal> BaseDaemon: 3. ? @ 0x000000000a082d64 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.479382 [ 404 ] <Fatal> BaseDaemon: 4. ? @ 0x000000000a06a4f7 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.479441 [ 404 ] <Fatal> BaseDaemon: 5. ? @ 0x0000000008857fea in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.479506 [ 404 ] <Fatal> BaseDaemon: 6. ? @ 0x000000000885758e in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.479570 [ 404 ] <Fatal> BaseDaemon: 7. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x00000000127633ef in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.479635 [ 404 ] <Fatal> BaseDaemon: 8. DB::IExecutableFunction::executeWithoutSparseColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000012763e62 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.479710 [ 404 ] <Fatal> BaseDaemon: 9. DB::IExecutableFunction::execute(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000012765159 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.479827 [ 404 ] <Fatal> BaseDaemon: 10. DB::ActionsDAG::updateHeader(DB::Block) const @ 0x0000000012eec78c in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.479898 [ 404 ] <Fatal> BaseDaemon: 11. DB::FilterTransform::transformHeader(DB::Block, DB::ActionsDAG const*, String const&, bool) @ 0x0000000014c7c844 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.479977 [ 404 ] <Fatal> BaseDaemon: 12. DB::FilterStep::FilterStep(DB::DataStream const&, std::shared_ptr<DB::ActionsDAG> const&, String, bool) @ 0x0000000014ddcf12 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480025 [ 404 ] <Fatal> BaseDaemon: 13. DB::InterpreterSelectQuery::executeWhere(DB::QueryPlan&, std::shared_ptr<DB::ActionsDAG> const&, bool) @ 0x000000001381ae40 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480054 [ 404 ] <Fatal> BaseDaemon: 14. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::optional<DB::Pipe>) @ 0x000000001380bdf3 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480092 [ 404 ] <Fatal> BaseDaemon: 15. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x0000000013809734 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480135 [ 404 ] <Fatal> BaseDaemon: 16. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x00000000138a7bf6 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480207 [ 404 ] <Fatal> BaseDaemon: 17. DB::InterpreterSelectWithUnionQuery::execute() @ 0x00000000138a8b04 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480263 [ 404 ] <Fatal> BaseDaemon: 18. ? @ 0x0000000013bd8153 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480317 [ 404 ] <Fatal> BaseDaemon: 19. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x0000000013bd422e in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480372 [ 404 ] <Fatal> BaseDaemon: 20. DB::TCPHandler::runImpl() @ 0x00000000149f8284 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480422 [ 404 ] <Fatal> BaseDaemon: 21. DB::TCPHandler::run() @ 0x0000000014a0f759 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480478 [ 404 ] <Fatal> BaseDaemon: 22. Poco::Net::TCPServerConnection::start() @ 0x000000001798e714 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480528 [ 404 ] <Fatal> BaseDaemon: 23. Poco::Net::TCPServerDispatcher::run() @ 0x000000001798f931 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480571 [ 404 ] <Fatal> BaseDaemon: 24. Poco::PooledThread::run() @ 0x0000000017b11727 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480634 [ 404 ] <Fatal> BaseDaemon: 25. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000017b0f15c in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 09:38:23.480676 [ 404 ] <Fatal> BaseDaemon: 26. ? @ 0x00007f8cb5a5a609 in ?
[a75f227bd0b9] 2023.07.12 09:38:23.480726 [ 404 ] <Fatal> BaseDaemon: 27. clone @ 0x00007f8cb597f133 in ?
[a75f227bd0b9] 2023.07.12 09:38:23.665405 [ 404 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: C8DB2CC6CEEB111C8AB95A0F4BA5538E)
[a75f227bd0b9] 2023.07.12 09:38:23.665926 [ 404 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[a75f227bd0b9] 2023.07.12 09:38:23.666124 [ 404 ] <Fatal> BaseDaemon: No settings were changed
``` | https://github.com/ClickHouse/ClickHouse/issues/52057 | https://github.com/ClickHouse/ClickHouse/pull/52172 | cf0b58a831472aed9b475ce1730e681506193626 | 8d10dd71f7e86c431bb9eeb76a1dee182dff8f1e | "2023-07-12T09:39:36Z" | c++ | "2023-07-18T15:53:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,055 | ["src/Processors/QueryPlan/Optimizations/optimizeUseAggregateProjection.cpp", "tests/queries/0_stateless/01710_minmax_count_projection_count_nullable.reference", "tests/queries/0_stateless/01710_minmax_count_projection_count_nullable.sql"] | Count over Nullable LowCardinality column |
**Describe what's wrong**
The `count` aggregate function returns inconsistent results over a `LowCardinality(Nullable(...))` column.
**Does it reproduce on recent release?**
yes
**How to reproduce**
* Which ClickHouse server version to use
`23.6.1.1362`
* `CREATE TABLE` statements for all tables involved
```sql
CREATE TABLE default.test
(
`val` LowCardinality(Nullable(String))
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS index_granularity = 8192;
```
* Sample data for all these tables, use [clickhouse-obfuscator]
```sql
insert into test select number == 3 ? 'some value' : null from numbers(5);
--ββvalβββββββββ
--β α΄Ία΅α΄Έα΄Έ β
--β α΄Ία΅α΄Έα΄Έ β
--β α΄Ία΅α΄Έα΄Έ β
--β some value β
--β α΄Ία΅α΄Έα΄Έ β
--ββββββββββββββ
```
* Queries to run that lead to unexpected result
```sql
SELECT count(val)
FROM test
Query id: d2061bd9-b54c-468b-9375-a8b3a591cb64
┌─count(val)─┐
│          5 │
└────────────┘
SELECT
count(val),
sum(val IS NOT NULL)
FROM test
Query id: be33b9a7-1afd-41ae-be4b-955babaf39f6
┌─count(val)─┬─sum(isNotNull(val))─┐
│          1 │                   1 │
└────────────┴─────────────────────┘
```
**Expected behavior**
count must return the same value for both of the select queries
| https://github.com/ClickHouse/ClickHouse/issues/52055 | https://github.com/ClickHouse/ClickHouse/pull/52297 | 46122bdf4a6ba086d29f677a9cda355af3d77ca4 | 758ec9fa9264d80d672a9f6fe0600c89a2da3aa2 | "2023-07-12T09:22:41Z" | c++ | "2023-07-22T17:39:12Z" |
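The invariant the report relies on can be stated in a few lines of Python (a sketch of the expected semantics over the sample data above, not ClickHouse code): counting a column should agree with summing its non-NULL indicator.

```python
# Expected invariant from the report: count(val) == sum(val IS NOT NULL).
rows = [None, None, None, "some value", None]
count_val = sum(1 for v in rows if v is not None)
sum_not_null = sum(v is not None for v in rows)
print(count_val, sum_not_null)  # 1 1 -- both queries should agree on this
```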
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,049 | ["src/Interpreters/ExpressionAnalyzer.cpp", "src/Interpreters/GetAggregatesVisitor.cpp", "src/Interpreters/GetAggregatesVisitor.h", "src/Interpreters/TreeRewriter.cpp", "src/Interpreters/TreeRewriter.h", "tests/queries/0_stateless/02814_order_by_tuple_window_function.reference", "tests/queries/0_stateless/02814_order_by_tuple_window_function.sql"] | ClickHouse Server 23.7.1.857 terminated by SIGABRT through create table and select stmts | **Describe the bug**
The ClickHouse 23.7.1.857 server is terminated by SIGABRT when running the CREATE TABLE and SELECT statements below.
**How to reproduce**
The SQL statement to reproduce:
```SQL
CREATE TABLE v0 ( v1 Nullable ( Int32 ) , col2 String , col3 Int32 , col4 Int32 ) ENGINE = MergeTree ( ) ORDER BY tuple ( ) ;
SELECT * FROM v0 ORDER BY tuple ( count ( * ) OVER ( ) , v1 , v1 , v0 . v1 ) ;
```
It can be reproduced on the official docker image. (`clickhouse/clickhouse-server:head` and `clickhouse/clickhouse-server:latest`).
The log produced by the ClickHouse server:
```
[a75f227bd0b9] 2023.07.12 04:14:15.507802 [ 356 ] <Fatal> BaseDaemon: ########################################
[a75f227bd0b9] 2023.07.12 04:14:15.507854 [ 356 ] <Fatal> BaseDaemon: (version 23.7.1.857 (official build), build id: EBF7F00D257A03E61ABF10B376C4A7BDD0EED2BA, git hash: 9aef39a7888e10995ea85faad559a110c0e22a82) (from thread 48) (query_id: 84622d16-135f-4140-8cec-4fa60a717536) (query: SELECT * FROM v0 ORDER BY tuple ( count ( * ) OVER ( ) , v1 , v1 , v0 . v1 ) ;) Received signal Aborted (6)
[a75f227bd0b9] 2023.07.12 04:14:15.507884 [ 356 ] <Fatal> BaseDaemon:
[a75f227bd0b9] 2023.07.12 04:14:15.507909 [ 356 ] <Fatal> BaseDaemon: Stack trace: 0x00007f63bb9b800b 0x00007f63bb997859 0x000000001a75c1a4 0x000000001a77a42d 0x00000000130db5a7 0x00000000130e2f56 0x000000001380df4c 0x0000000013805ffa 0x00000000137ffa3d 0x00000000138a6066 0x00000000138a3c73 0x00000000137b8f9e 0x0000000013bd79d6 0x0000000013bd422e 0x00000000149f8284 0x0000000014a0f759 0x000000001798e714 0x000000001798f931 0x0000000017b11727 0x0000000017b0f15c 0x00007f63bbb6f609 0x00007f63bba94133
[a75f227bd0b9] 2023.07.12 04:14:15.507945 [ 356 ] <Fatal> BaseDaemon: 2. raise @ 0x00007f63bb9b800b in ?
[a75f227bd0b9] 2023.07.12 04:14:15.507970 [ 356 ] <Fatal> BaseDaemon: 3. abort @ 0x00007f63bb997859 in ?
[a75f227bd0b9] 2023.07.12 04:14:15.508010 [ 356 ] <Fatal> BaseDaemon: 4. ? @ 0x000000001a75c1a4 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508056 [ 356 ] <Fatal> BaseDaemon: 5. ? @ 0x000000001a77a42d in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508096 [ 356 ] <Fatal> BaseDaemon: 6. DB::SelectQueryExpressionAnalyzer::appendExpressionsAfterWindowFunctions(DB::ExpressionActionsChain&, bool) @ 0x00000000130db5a7 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508135 [ 356 ] <Fatal> BaseDaemon: 7. DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::shared_ptr<DB::FilterDAGInfo> const&, std::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) @ 0x00000000130e2f56 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508167 [ 356 ] <Fatal> BaseDaemon: 8. DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0x000000001380df4c in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508187 [ 356 ] <Fatal> BaseDaemon: 9. ? @ 0x0000000013805ffa in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508228 [ 356 ] <Fatal> BaseDaemon: 10. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>) @ 0x00000000137ffa3d in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508283 [ 356 ] <Fatal> BaseDaemon: 11. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::shared_ptr<DB::IAST> const&, std::vector<String, std::allocator<String>> const&) @ 0x00000000138a6066 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508379 [ 356 ] <Fatal> BaseDaemon: 12. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&) @ 0x00000000138a3c73 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508465 [ 356 ] <Fatal> BaseDaemon: 13. DB::InterpreterFactory::get(std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x00000000137b8f9e in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508519 [ 356 ] <Fatal> BaseDaemon: 14. ? @ 0x0000000013bd79d6 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508594 [ 356 ] <Fatal> BaseDaemon: 15. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x0000000013bd422e in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508663 [ 356 ] <Fatal> BaseDaemon: 16. DB::TCPHandler::runImpl() @ 0x00000000149f8284 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508748 [ 356 ] <Fatal> BaseDaemon: 17. DB::TCPHandler::run() @ 0x0000000014a0f759 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508820 [ 356 ] <Fatal> BaseDaemon: 18. Poco::Net::TCPServerConnection::start() @ 0x000000001798e714 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.508885 [ 356 ] <Fatal> BaseDaemon: 19. Poco::Net::TCPServerDispatcher::run() @ 0x000000001798f931 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.511404 [ 356 ] <Fatal> BaseDaemon: 20. Poco::PooledThread::run() @ 0x0000000017b11727 in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.511441 [ 356 ] <Fatal> BaseDaemon: 21. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000017b0f15c in /usr/bin/clickhouse
[a75f227bd0b9] 2023.07.12 04:14:15.511492 [ 356 ] <Fatal> BaseDaemon: 22. ? @ 0x00007f63bbb6f609 in ?
[a75f227bd0b9] 2023.07.12 04:14:15.511557 [ 356 ] <Fatal> BaseDaemon: 23. clone @ 0x00007f63bba94133 in ?
[a75f227bd0b9] 2023.07.12 04:14:15.700024 [ 356 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: C8DB2CC6CEEB111C8AB95A0F4BA5538E)
[a75f227bd0b9] 2023.07.12 04:14:15.700392 [ 356 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[a75f227bd0b9] 2023.07.12 04:14:15.700532 [ 356 ] <Fatal> BaseDaemon: No settings were changed
```
| https://github.com/ClickHouse/ClickHouse/issues/52049 | https://github.com/ClickHouse/ClickHouse/pull/52145 | 3b0a4040cd1c3864277e22bb805e26823c0194a1 | 9bf114f9a36556d6f227dea0fa91be131ee99710 | "2023-07-12T04:19:07Z" | c++ | "2023-07-16T14:43:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,020 | ["src/AggregateFunctions/AggregateFunctionGroupArrayMoving.cpp", "src/AggregateFunctions/AggregateFunctionGroupArrayMoving.h", "tests/queries/0_stateless/02817_group_array_moving_zero_window_size.reference", "tests/queries/0_stateless/02817_group_array_moving_zero_window_size.sql"] | ClickHouse Server receives SIGFPE after running a SELECT statement | **Describe the bug**
The ClickHouse server received signal Arithmetic exception (8) due to "Integer divide by zero" after running a SQL statement.
**How to reproduce**
The SQL statement to reproduce:
```sql
SELECT groupArrayMovingAvg ( toInt64 ( 0 ) ) ( toDecimal32 ( 1 , 1 ) );
```
It can be reproduced on the official docker image. (`clickhouse/clickhouse-server:head` and `clickhouse/clickhouse-server:latest`).
The log:
```
SELECT groupArrayMovingAvg(toInt64(0))(toDecimal32(1, 1 AS is_empty))
FROM numbers(300)
Query id: fae5edc7-50d4-4462-872b-9530457300ee
[50992739ea1d] 2023.07.10 13:52:59.888348 [ 388 ] <Fatal> BaseDaemon: ########################################
[50992739ea1d] 2023.07.10 13:52:59.888405 [ 388 ] <Fatal> BaseDaemon: (version 23.7.1.857 (official build), build id: EBF7F00D257A03E61ABF10B376C4A7BDD0EED2BA, git hash: 9aef39a7888e10995ea85faad559a110c0e22a82) (from thread 349) (query_id: fae5edc7-50d4-4462-872b-9530457300ee) (query: SELECT groupArrayMovingAvg ( toInt64 ( 0 ) ) ( toDecimal32 ( 1 , 1 AS is_empty ) ) FROM numbers ( 300 ) ;) Received signal Arithmetic exception (8)
[50992739ea1d] 2023.07.10 13:52:59.888444 [ 388 ] <Fatal> BaseDaemon: Integer divide by zero.
[50992739ea1d] 2023.07.10 13:52:59.888472 [ 388 ] <Fatal> BaseDaemon: Stack trace: 0x00000000081aa9af 0x0000000010a90a87 0x0000000010abee26 0x00000000131a426d 0x00000000131a3e6b 0x0000000014c5c790 0x0000000014a5d1e9 0x0000000014a53e50 0x0000000014a54fe3 0x000000000e29d8af 0x000000000e2a05fc 0x000000000e2991ef 0x000000000e29f1e1 0x00007f92b3447609 0x00007f92b336c133
[50992739ea1d] 2023.07.10 13:52:59.888529 [ 388 ] <Fatal> BaseDaemon: 2. ? @ 0x00000000081aa9af in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.888566 [ 388 ] <Fatal> BaseDaemon: 3. ? @ 0x0000000010a90a87 in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.888592 [ 388 ] <Fatal> BaseDaemon: 4. ? @ 0x0000000010abee26 in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.888624 [ 388 ] <Fatal> BaseDaemon: 5. ? @ 0x00000000131a426d in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.888669 [ 388 ] <Fatal> BaseDaemon: 6. DB::Aggregator::prepareBlockAndFillWithoutKey(DB::AggregatedDataVariants&, bool, bool) const @ 0x00000000131a3e6b in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.888701 [ 388 ] <Fatal> BaseDaemon: 7. ? @ 0x0000000014c5c790 in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.888733 [ 388 ] <Fatal> BaseDaemon: 8. DB::ExecutionThreadContext::executeTask() @ 0x0000000014a5d1e9 in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.888761 [ 388 ] <Fatal> BaseDaemon: 9. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x0000000014a53e50 in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.888797 [ 388 ] <Fatal> BaseDaemon: 10. ? @ 0x0000000014a54fe3 in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.888858 [ 388 ] <Fatal> BaseDaemon: 11. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000e29d8af in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.888931 [ 388 ] <Fatal> BaseDaemon: 12. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000e2a05fc in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.888962 [ 388 ] <Fatal> BaseDaemon: 13. ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0x000000000e2991ef in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.888991 [ 388 ] <Fatal> BaseDaemon: 14. ? @ 0x000000000e29f1e1 in /usr/bin/clickhouse
[50992739ea1d] 2023.07.10 13:52:59.889042 [ 388 ] <Fatal> BaseDaemon: 15. ? @ 0x00007f92b3447609 in ?
[50992739ea1d] 2023.07.10 13:52:59.889075 [ 388 ] <Fatal> BaseDaemon: 16. clone @ 0x00007f92b336c133 in ?
[50992739ea1d] 2023.07.10 13:53:00.072033 [ 388 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: C8DB2CC6CEEB111C8AB95A0F4BA5538E)
[50992739ea1d] 2023.07.10 13:53:00.072250 [ 388 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[50992739ea1d] 2023.07.10 13:53:00.072353 [ 388 ] <Fatal> BaseDaemon: No settings were changed
``` | https://github.com/ClickHouse/ClickHouse/issues/52020 | https://github.com/ClickHouse/ClickHouse/pull/52161 | 6d0ed2963aebf6bde4057fcfcab0ce8fac3a5ecf | 416433d40280b65c13d47e6d3204db4849ae809e | "2023-07-10T14:04:02Z" | c++ | "2023-07-18T23:03:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,962 | ["programs/install/Install.cpp"] | init.d behaves strange with /etc/default/clickhouse and systemctl | If I use systemctl, `/etc/default/clickhouse` is honored.
If I use init.d, `/etc/default/clickhouse` is not honored, even though the init.d script should go through the same systemctl unit, which does load `/etc/default/clickhouse`.
```
cat /etc/default/clickhouse
CLICKHOUSE_WATCHDOG_ENABLE=0
systemctl restart clickhouse-server
ps -ef
clickho+ 1671394 1 7 20:22 ? 00:00:00 /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
no watchdog.
/etc/init.d/clickhouse-server stop
ps -ef
no clickhouse
/etc/init.d/clickhouse-server start
ps -ef
clickho+ 1671707 1 0 20:23 ? 00:00:00 clickhouse-watchdog --config-file /etc/clickhouse-server/config.xml --pid-file /var/run/clickhouse-server/clickhouse-server.pid --daemon
clickho+ 1671708 1671707 13 20:23 ? 00:00:00 /usr/bin/clickhouse-server --config-file /etc/clickhouse-server/config.xml --pid-file /var/run/clickhouse-server/clickhouse-server.pid --daemon
/etc/init.d/clickhouse-server stop
systemctl stop clickhouse-server
``` | https://github.com/ClickHouse/ClickHouse/issues/51962 | https://github.com/ClickHouse/ClickHouse/pull/53418 | 39e481934c8cf99fe698c9b5d109d8b9ab2464c6 | 5d0a05ef196e7f68100581f28f6df523f4607d3e | "2023-07-07T20:27:50Z" | c++ | "2023-08-14T18:29:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,960 | ["tests/queries/0_stateless/02901_parallel_replicas_rollup.reference", "tests/queries/0_stateless/02901_parallel_replicas_rollup.sh"] | Parallel replicas: wrong block size | 
SET use_hedged_requests = false, max_parallel_replicas = 3, cluster_for_parallel_replicas = 'parallel_replicas', allow_experimental_parallel_reading_from_replicas = 1, parallel_replicas_for_non_replicated_merge_tree = 1;
CREATE TABLE nested (x UInt8) ENGINE = MergeTree ORDER BY ();
INSERT INTO nested VALUES (1);
SELECT 1 FROM nested
GROUP BY 1 WITH ROLLUP
ORDER BY max((SELECT 1 WHERE 0));
```
https://fiddle.clickhouse.com/e340a1e8-5ec2-496e-8fd4-5518199fb16a
| https://github.com/ClickHouse/ClickHouse/issues/51960 | https://github.com/ClickHouse/ClickHouse/pull/55886 | fd4df9a38ac8ef8e8eedad7e7d9eabd4571b2a90 | 4eb335eb4ca20da3022d7bbee440803045aca5c3 | "2023-07-07T19:28:45Z" | c++ | "2023-10-23T09:23:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,918 | ["src/AggregateFunctions/AggregateFunctionMinMaxAny.h", "tests/queries/0_stateless/02812_subquery_operators.reference", "tests/queries/0_stateless/02812_subquery_operators.sql"] | nullptr dereference in subquery operators | Originally logical error in the test, and a segmentation fault in Fiddle.
**Test:** https://s3.amazonaws.com/clickhouse-test-reports/51881/c7ccf23a24a7fb2bb1245b76fc9169649cd474c3/stateless_tests__debug__[5_5].html
**Fiddle:** https://fiddle.clickhouse.com/1e170d25-91cf-4730-918f-045a407efec5
**Logs:**
```[b302d84f6a02] 2023.07.07 02:04:23.262906 [ 326 ] <Fatal> BaseDaemon: ########################################
[b302d84f6a02] 2023.07.07 02:04:23.262972 [ 326 ] <Fatal> BaseDaemon: (version 23.6.1.1524 (official build), build id: 39AC174BEE13B59373FF7F29F65984557B4408B0, git hash: d1c7e13d08868cb04d3562dcced704dd577cb1df) (from thread 322) (query_id: c5d7191d-6a92-488f-a289-c961b19c9d3f) (query: SELECT '', ['\0'], [], singleValueOrNull(( SELECT '\0' ) ), [''];) Received signal Segmentation fault (11)
[b302d84f6a02] 2023.07.07 02:04:23.263002 [ 326 ] <Fatal> BaseDaemon: Address: 0x7f72dabdaff8. Access: read. Address not mapped to object.
[b302d84f6a02] 2023.07.07 02:04:23.263030 [ 326 ] <Fatal> BaseDaemon: Stack trace: 0x0000000010b23cae 0x0000000010b226b1 0x000000001313ca7f 0x000000001313c253 0x0000000014e9ece8 0x0000000014e9aac7 0x0000000014ca3bc9 0x0000000014c9a890 0x0000000014c9ba23 0x000000000e2d5be3 0x000000000e2d83f5 0x000000000e2d1a74 0x000000000e2d7281 0x00007f738746f609 0x00007f7387394133
[b302d84f6a02] 2023.07.07 02:04:23.263100 [ 326 ] <Fatal> BaseDaemon: 2. ? @ 0x0000000010b23cae in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263125 [ 326 ] <Fatal> BaseDaemon: 3. ? @ 0x0000000010b226b1 in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263146 [ 326 ] <Fatal> BaseDaemon: 4. ? @ 0x000000001313ca7f in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263191 [ 326 ] <Fatal> BaseDaemon: 5. DB::Aggregator::executeOnBlock(std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>, unsigned long, unsigned long, DB::AggregatedDataVariants&, std::vector<DB::IColumn const*, std::allocator<DB::IColumn const*>>&, std::vector<std::vector<DB::IColumn const*, std::allocator<DB::IColumn const*>>, std::allocator<std::vector<DB::IColumn const*, std::allocator<DB::IColumn const*>>>>&, bool&) const @ 0x000000001313c253 in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263223 [ 326 ] <Fatal> BaseDaemon: 6. DB::AggregatingTransform::consume(DB::Chunk) @ 0x0000000014e9ece8 in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263248 [ 326 ] <Fatal> BaseDaemon: 7. DB::AggregatingTransform::work() @ 0x0000000014e9aac7 in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263273 [ 326 ] <Fatal> BaseDaemon: 8. DB::ExecutionThreadContext::executeTask() @ 0x0000000014ca3bc9 in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263301 [ 326 ] <Fatal> BaseDaemon: 9. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x0000000014c9a890 in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263326 [ 326 ] <Fatal> BaseDaemon: 10. ? @ 0x0000000014c9ba23 in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263358 [ 326 ] <Fatal> BaseDaemon: 11. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000e2d5be3 in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263392 [ 326 ] <Fatal> BaseDaemon: 12. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000e2d83f5 in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263419 [ 326 ] <Fatal> BaseDaemon: 13. ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0x000000000e2d1a74 in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263436 [ 326 ] <Fatal> BaseDaemon: 14. ? @ 0x000000000e2d7281 in /usr/bin/clickhouse
[b302d84f6a02] 2023.07.07 02:04:23.263458 [ 326 ] <Fatal> BaseDaemon: 15. ? @ 0x00007f738746f609 in ?
[b302d84f6a02] 2023.07.07 02:04:23.263495 [ 326 ] <Fatal> BaseDaemon: 16. __clone @ 0x00007f7387394133 in ?
[b302d84f6a02] 2023.07.07 02:04:23.391920 [ 326 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: 2CCE0342490FE96D02C1130FB4CA7EE2)
[b302d84f6a02] 2023.07.07 02:04:23.392087 [ 326 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[b302d84f6a02] 2023.07.07 02:04:23.392184 [ 326 ] <Fatal> BaseDaemon: Changed settings: output_format_pretty_color = false, output_format_pretty_grid_charset = 'ASCII'
```
| https://github.com/ClickHouse/ClickHouse/issues/51918 | https://github.com/ClickHouse/ClickHouse/pull/51922 | b958499c27df57b9ffff05a1efbbcad14591ba20 | 781d5e3aa397c5e83533e15a52404dbdce613a00 | "2023-07-07T02:12:43Z" | c++ | "2023-07-08T07:37:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,846 | ["src/Storages/StorageMerge.cpp", "src/Storages/StorageMerge.h", "tests/queries/0_stateless/02875_merge_engine_set_index.reference", "tests/queries/0_stateless/02875_merge_engine_set_index.sh"] | IN operator much slower on 23.6.1 than on 23.4.2.11 | 
**Describe the unexpected behaviour**
I have a Merge table, say `t_merge`, and a Memory table `t_memory`.
On version 23.4.2.11, using the IN operator to query records from `t_merge` with `WHERE (xx, xxx) IN t_memory` is much faster than `t_merge LEFT SEMI JOIN t_memory`: roughly 100 ms vs 6 s.
But after upgrading to version 23.6.1, the IN operator slowed down to about 6 s, almost the same as the LEFT SEMI JOIN.
What is wrong?
**How to reproduce**
* ClickHouse server version: 23.6.1
* Interface: JetBrains DataGrip over HTTP
**Expected behavior**
The IN operator should perform as well as it did on version 23.4.2.11.
**Additional context**
There is a signal that may be related: the server seems to return rows quickly, just as in the former version (not sure whether all rows or just a part of them), but the query is only reported as finished after several seconds.
Maybe the server already finished the work, but blocks for a while before completing the query?
| https://github.com/ClickHouse/ClickHouse/issues/51846 | https://github.com/ClickHouse/ClickHouse/pull/54905 | 49f9b6d34009459da8a3ee5627685ad21b453446 | c643069b0d91c965c6cc0255ed3d5ff976a2d430 | "2023-07-05T14:35:27Z" | c++ | "2023-11-09T13:52:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,811 | ["src/Storages/checkAndGetLiteralArgument.cpp", "tests/queries/0_stateless/02811_invalid_embedded_rocksdb_create.reference", "tests/queries/0_stateless/02811_invalid_embedded_rocksdb_create.sql"] | Wrong create sql with EmbeddedRocksDB engine can crash the server | 
SQL to reproduce:
```sql
CREATE TABLE dict
(
`k` String,
`v` String
)
ENGINE = EmbeddedRocksDB(k)
PRIMARY KEY k
```
https://fiddle.clickhouse.com/b8df4e3e-110b-43ad-838f-da6a1d5636e8
**Does it reproduce on recent release?**
Yes
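A minimal sketch of the expected fix (illustrative names only, not ClickHouse's actual API): the engine argument should be validated as a literal, so a bare column identifier like `k` produces a clear `BAD_ARGUMENTS` error instead of crashing the server.

```python
# Hypothetical model of checkAndGetLiteralArgument: reject any AST node
# that is not a literal instead of dereferencing it blindly.
def check_and_get_literal_argument(node):
    if node.get("type") != "Literal":
        raise ValueError("BAD_ARGUMENTS: engine argument must be a literal")
    return node["value"]

# A bare identifier, as in ENGINE = EmbeddedRocksDB(k), must error out:
try:
    check_and_get_literal_argument({"type": "Identifier", "name": "k"})
except ValueError as e:
    print(e)
```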
| https://github.com/ClickHouse/ClickHouse/issues/51811 | https://github.com/ClickHouse/ClickHouse/pull/51847 | fbdb8f387e98a5efeb2a713db38afee2aa581391 | 236ee81266e3e8d842fe611e4f592c13b582d33d | "2023-07-05T06:46:47Z" | c++ | "2023-07-07T23:50:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,782 | ["docs/en/sql-reference/statements/show.md", "src/Interpreters/InterpreterShowIndexesQuery.cpp", "tests/queries/0_stateless/02724_show_indexes.reference"] | MySQL compatibility: SHOW KEYS output | **Describe the unexpected behaviour**
BI tools such as QuickSight use `SHOW KEYS` queries to determine the primary keys of the dataset.
The output of the `SHOW KEYS` command is different from a real MySQL server.
"Real" MySQL:
```
mysql> SHOW KEYS FROM `cell_towers` FROM `mysql`;
+-------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | Visible | Expression |
+-------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
| cell_towers | 0 | PRIMARY | 1 | radio | A | 0 | NULL | NULL | | BTREE | | | YES | NULL |
| cell_towers | 0 | PRIMARY | 2 | mcc | A | 0 | NULL | NULL | | BTREE | | | YES | NULL |
| cell_towers | 0 | PRIMARY | 3 | net | A | 0 | NULL | NULL | | BTREE | | | YES | NULL |
| cell_towers | 0 | PRIMARY | 4 | created | A | 0 | NULL | NULL | | BTREE | | | YES | NULL |
+-------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+------------+
```
ClickHouse via MySQL interface:
```
mysql> SHOW KEYS FROM `cell_towers` FROM `default`;
+-------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+--------------------------+
| table | non_unique | key_name | seq_in_index | column_name | collation | cardinality | sub_part | packed | null | index_type | comment | index_comment | visible | expression |
+-------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+--------------------------+
| cell_towers | 0 | PRIMARY | NULL | NULL | A | NULL | NULL | NULL | NULL | primary | NULL | NULL | YES | radio, mcc, net, created |
+-------------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+---------+--------------------------+
```
**How to reproduce**
* Which ClickHouse server version to use: 23.6
* Which interface to use, if it matters: MySQL
* Non-default settings, if any: I added `use_mysql_types_in_show_columns` to the default profile.
```xml
<profiles>
<default>
<load_balancing>random</load_balancing>
<use_mysql_types_in_show_columns>1</use_mysql_types_in_show_columns>
</default>
</profiles>
```
* `CREATE TABLE` statements for all tables involved: [cell towers dataset](https://clickhouse.com/docs/en/getting-started/example-datasets/cell-towers) for ClickHouse, and a simple table like
```
create table cell_towers (radio TEXT, mcc INTEGER, net INTEGER, created DATETIME, PRIMARY KEY (radio, mcc, net, created));
```
for MySQL
* Queries to run that lead to unexpected result
```
SHOW KEYS FROM `cell_towers` FROM `default`;
```
**Expected behavior**
* The primary key is reported as a single row, whereas a composite key in MySQL is reported as 1 row per column
* There are NULLs in the output where a real MySQL returns numbers and/or empty strings, which supposedly causes QuickSight errors.
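The expected row shape can be sketched like this (a hypothetical helper, not ClickHouse code): a composite primary key expands to one row per column with a 1-based `Seq_in_index`, as MySQL reports it.

```python
# Model of the MySQL-style SHOW KEYS output for a composite primary key.
def show_keys_rows(table, pk_columns):
    return [
        {"Table": table, "Non_unique": 0, "Key_name": "PRIMARY",
         "Seq_in_index": i, "Column_name": col}
        for i, col in enumerate(pk_columns, start=1)
    ]

for row in show_keys_rows("cell_towers", ["radio", "mcc", "net", "created"]):
    print(row["Seq_in_index"], row["Column_name"])
```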
CC @tpanetti | https://github.com/ClickHouse/ClickHouse/issues/51782 | https://github.com/ClickHouse/ClickHouse/pull/51796 | d0cb6596e1ca101fc8aa20ac1d0f43cab35e4b2d | 2d89dba8d1d142bab54a2b17e5ad6e818de3a748 | "2023-07-04T13:59:35Z" | c++ | "2023-07-21T22:22:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,762 | ["src/Dictionaries/CacheDictionary.cpp"] | cache dictionary request the same key lot of times | then one single block of data processed by dictGet have a lot of duplicate keys ClickHouse will construct suboptimal query like
```sql
SELECT ... FROM dict_source WHERE `id` IN (1000, 1000, ..., 1000, 1000, 1000);
```
instead of simple
```sql
SELECT ... FROM dict_source WHERE `id` IN (1000);
```
[Repro](https://fiddle.clickhouse.com/bcfe9605-560b-4b85-8c2e-7029fa6f67f5):
```sql
DROP TABLE IF EXISTS dict_source;
DROP DICTIONARY IF EXISTS test_cache_dict;
CREATE TABLE dict_source engine = Log AS SELECT number as id, toString(number) as value FROM numbers(10000);
CREATE DICTIONARY IF NOT EXISTS test_cache_dict (
id UInt64,
value String
)
PRIMARY KEY id
SOURCE(
CLICKHOUSE(
host '127.0.0.2'
DB currentDatabase()
TABLE 'dict_source'
)
)
LAYOUT(
CACHE(SIZE_IN_CELLS 10000)
)
LIFETIME(MIN 1 MAX 100);
SELECT dictGet(test_cache_dict, 'value', materialize(toUInt64(1000))) FROM numbers_mt(1000) SETTINGS max_block_size = 50, max_threads = 4 FORMAT Null;
system flush logs;
SELECT event_time, query FROM system.query_log WHERE event_time > now() - 300 and has(tables, currentDatabase() || '.dict_source') and type = 'QueryFinish' and type = 'QueryFinish' and query_kind='Select' ORDER BY event_time DESC LIMIT 10;
SELECT dict_key, count(query_id) number_of_queries_to_source, sum(count_per_query) as sum_key_requests, max(count_per_query) as max_key_requests_per_query FROM (
SELECT arrayJoin(splitByRegexp(',\s+', extract(query, 'IN \((.*)\);'))) as dict_key, query_id, count() count_per_query FROM system.query_log WHERE event_time > now() - 300 and has(tables, currentDatabase() || '.dict_source') and type = 'QueryFinish' and query_kind='Select' GROUP BY dict_key, query_id) GROUP BY dict_key FORMAT PrettyCompactMonoBlock;
```
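The expected behaviour can be sketched in plain Python (illustrative only, not the actual CacheDictionary code): deduplicate the requested keys before building the `IN` list for the source query.

```python
# Deduplicate requested keys (preserving first-seen order) before
# constructing the query sent to the dictionary source.
def build_source_query(keys):
    unique_keys = list(dict.fromkeys(keys))
    in_list = ", ".join(str(k) for k in unique_keys)
    return f"SELECT ... FROM dict_source WHERE `id` IN ({in_list});"

# The same single-key query regardless of how many times the key repeats:
print(build_source_query([1000] * 50))
# -> SELECT ... FROM dict_source WHERE `id` IN (1000);
```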
| https://github.com/ClickHouse/ClickHouse/issues/51762 | https://github.com/ClickHouse/ClickHouse/pull/51853 | 5d385f5f14bffc93b683cfff016d1c4f6eb7f107 | 1343e5cc452f1ca23e3dce3c03856e01f5051648 | "2023-07-04T09:26:39Z" | c++ | "2023-07-08T18:58:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,753 | ["src/IO/ReadHelpers.cpp", "src/IO/ReadHelpers.h", "tests/queries/0_stateless/01268_DateTime64_in_WHERE.sql", "tests/queries/0_stateless/01702_toDateTime_from_string_clamping.reference", "tests/queries/0_stateless/02373_datetime64_monotonicity.queries", "tests/queries/0_stateless/02373_datetime64_monotonicity.reference", "tests/queries/0_stateless/02889_datetime64_from_string.reference", "tests/queries/0_stateless/02889_datetime64_from_string.sql"] | DateTime64 inconsistent parsing from String | DateTime64 parsing goes wrong for string representing a number smaller than 10000:
```
select toDateTime64('9999', 3) -- returns 1970-01-01 00:00:00.000, but CANNOT_PARSE_DATETIME expected
select toDateTime64('10000', 3) -- returns 1970-01-01 02:46:40.000 (expected)
```
However, DateTime doesn't have this:
```
select toDateTime('9999'); -- { serverError CANNOT_PARSE_DATETIME } (expected)
select toDateTime('10000') -- returns 1970-01-01 02:46:40 (expected)
```
See [fiddle](https://fiddle.clickhouse.com/a9c8deb1-ee2c-48f6-8b1f-2480a9fb00f9).
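The rule implied by the DateTime examples above can be sketched as follows (plain Python, not ClickHouse's actual parser): a purely numeric string is treated as a unix timestamp only when it has at least 5 digits; shorter numeric strings cannot be a valid date-time and should raise an error, which is what DateTime64 is expected to do as well.

```python
# Sketch of the expected behaviour for purely numeric strings.
def parse_numeric_datetime(s):
    digits = s.lstrip("-")
    if not digits.isdigit():
        raise NotImplementedError("non-numeric forms are parsed elsewhere")
    if len(digits) < 5:
        raise ValueError("CANNOT_PARSE_DATETIME")
    return int(s)  # seconds since the unix epoch

print(parse_numeric_datetime("10000"))  # 10000 -> 1970-01-01 02:46:40
```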
**Does it reproduce on recent release?**
Yes, on 23.6.1
Also happens on older versions, e.g. [22.6.7](https://fiddle.clickhouse.com/d5dfbf2c-9f5a-41ae-badb-648bfb0ca767)
**Expected behavior**
DateTime64 should not silently produce a wrong result; it should throw an exception, like DateTime does.
Related to [these](https://github.com/ClickHouse/ClickHouse/issues/50868#issuecomment-1604947763) [two](https://github.com/ClickHouse/ClickHouse/issues/50868#issuecomment-1587324935) comments.
#### Parts:
- [x] fix parsing of decimal string numbers with short whole part (e.g. `'123.12'`) -- definitely no ambiguity here
- [x] allow negative numbers in short strings for DT64 -- also no ambiguity | https://github.com/ClickHouse/ClickHouse/issues/51753 | https://github.com/ClickHouse/ClickHouse/pull/55146 | 6068277af9725b88ec5e5a48ef38476bae1b7d8e | b4ab47380abd0837233385af7b853a8b7596660c | "2023-07-03T22:03:10Z" | c++ | "2023-10-25T10:32:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,706 | ["docs/_description_templates/template-data-type.md", "docs/_description_templates/template-engine.md", "docs/_description_templates/template-function.md", "docs/_description_templates/template-server-setting.md", "docs/_description_templates/template-setting.md", "docs/_description_templates/template-statement.md", "docs/_description_templates/template-system-table.md"] | 404 by the links to description templates | https://github.com/ClickHouse/ClickHouse/blob/master/docs/README.md#description-templates
All of the template links return 404, as in the screenshot below.

| https://github.com/ClickHouse/ClickHouse/issues/51706 | https://github.com/ClickHouse/ClickHouse/pull/51747 | a14f4ae83bf1d73bb03abacb98729b3fef721f61 | 160c89650241bf05dab022247b2b24e3c633e7cf | "2023-07-02T08:47:48Z" | c++ | "2023-07-03T18:10:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,667 | ["tests/queries/0_stateless/02116_tuple_element.sql"] | [Analyzer] flaky test: 02116_tuple_element | https://s3.amazonaws.com/clickhouse-test-reports/0/3b7f6a43243913f302f8c19c664e07ac6ee790e4/stateless_tests__release__analyzer_.html
```
2023-06-30 04:58:10 Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Tuple doesn't have element with index '0': In scope SELECT t1.0 FROM t_tuple_element. (NOT_FOUND_COLUMN_IN_BLOCK)
2023-06-30 04:58:10 (query: SELECT tupleElement(t1, 0) FROM t_tuple_element; -- { serverError ILLEGAL_INDEX })
``` | https://github.com/ClickHouse/ClickHouse/issues/51667 | https://github.com/ClickHouse/ClickHouse/pull/51669 | 1ce99baf00c96196e88a950a517f98a5efc317c8 | d85f5cc4cf46aed7419feb82dffa085b392f6bff | "2023-06-30T10:16:45Z" | c++ | "2023-07-02T16:02:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,660 | ["src/Processors/QueryPlan/Optimizations/liftUpFunctions.cpp", "tests/queries/0_stateless/02789_functions_after_sorting_and_columns_with_same_names_bug.reference", "tests/queries/0_stateless/02789_functions_after_sorting_and_columns_with_same_names_bug.sql", "tests/queries/0_stateless/02789_functions_after_sorting_and_columns_with_same_names_bug_2.reference", "tests/queries/0_stateless/02789_functions_after_sorting_and_columns_with_same_names_bug_2.sql"] | Incorrect sort result after join | 
We have two tables with data below:
```
create table test1 (
`pt` String,
`brand_name` String,
`total_indirect_order_cnt` Float64,
`total_indirect_gmv` Float64
) ENGINE = Memory
create table test2 (
`pt` String,
`brand_name` String,
`exposure_uv` Float64,
`click_uv` Float64
) ENGINE = Memory
INSERT INTO test1 (`pt`, `brand_name`, `total_indirect_order_cnt`, `total_indirect_gmv`) VALUES
('20230625', 'LINING', 2232, 1008710),
('20230625', 'adidas', 125, 58820),
('20230625', 'Nike', 1291, 1033020),
('20230626', 'Nike', 1145, 938926),
('20230626', 'LINING', 1904, 853336),
('20230626', 'adidas', 133, 62546),
('20220626', 'LINING', 3747, 1855203),
('20220626', 'Nike', 2295, 1742665),
('20220626', 'adidas', 302, 122388);
INSERT INTO test2 (`pt`, `brand_name`, `exposure_uv`, `click_uv`) VALUES
('20230625', 'Nike', 2012913, 612831),
('20230625', 'adidas', 480277, 96176),
('20230625', 'LINING', 2474234, 627814),
('20230626', 'Nike', 1934666, 610770),
('20230626', 'adidas', 469904, 91117),
('20230626', 'LINING', 2285142, 599765),
('20220626', 'Nike', 2979656, 937166),
('20220626', 'adidas', 704751, 124250),
('20220626', 'LINING', 3163884, 1010221);
```
When we query using the SQL below:
```
SELECT * FROM (
SELECT m0.pt AS pt
,m0.`uvctr` AS uvctr
,round(m1.uvctr,4) AS uvctr_hb_last_value
,round(m2.uvctr,4) AS uvctr_tb_last_value
FROM
(
SELECT m0.pt AS pt
,COALESCE(m0.brand_name,m1.brand_name) AS brand_name
,if(isNaN(`click_uv` / `exposure_uv`) OR isInfinite(`click_uv` / `exposure_uv`),NULL,`click_uv` / `exposure_uv`) AS `uvctr`
FROM
(
SELECT pt AS pt
,brand_name AS `brand_name`
,exposure_uv AS `exposure_uv`
,click_uv AS `click_uv`
FROM test2
WHERE pt = '20230626'
) m0
FULL JOIN
(
SELECT pt AS pt
,brand_name AS `brand_name`
,total_indirect_order_cnt AS `total_indirect_order_cnt`
,total_indirect_gmv AS `total_indirect_gmv`
FROM test1
WHERE pt = '20230626'
) m1
ON m0.brand_name = m1.brand_name AND m0.pt = m1.pt
) m0
LEFT JOIN
(
SELECT m0.pt AS pt
,if(isNaN(`click_uv` / `exposure_uv`) OR isInfinite(`click_uv` / `exposure_uv`),NULL,`click_uv` / `exposure_uv`) AS `uvctr`
,COALESCE(m0.brand_name,m1.brand_name) AS brand_name
,`exposure_uv` AS `exposure_uv`
,`click_uv`
FROM
(
SELECT pt AS pt
,brand_name AS `brand_name`
,exposure_uv AS `exposure_uv`
,click_uv AS `click_uv`
FROM test2
WHERE pt = '20230625'
) m0
FULL JOIN
(
SELECT pt AS pt
,brand_name AS `brand_name`
,total_indirect_order_cnt AS `total_indirect_order_cnt`
,total_indirect_gmv AS `total_indirect_gmv`
FROM test1
WHERE pt = '20230625'
) m1
ON m0.brand_name = m1.brand_name AND m0.pt = m1.pt
) m1
ON m0.brand_name = m1.brand_name AND m0.pt = m1.pt
LEFT JOIN
(
SELECT m0.pt AS pt
,if(isNaN(`click_uv` / `exposure_uv`) OR isInfinite(`click_uv` / `exposure_uv`),NULL,`click_uv` / `exposure_uv`) AS `uvctr`
,COALESCE(m0.brand_name,m1.brand_name) AS brand_name
,`exposure_uv` AS `exposure_uv`
,`click_uv`
FROM
(
SELECT pt AS pt
,brand_name AS `brand_name`
,exposure_uv AS `exposure_uv`
,click_uv AS `click_uv`
FROM test2
WHERE pt = '20220626'
) m0
FULL JOIN
(
SELECT pt AS pt
,brand_name AS `brand_name`
,total_indirect_order_cnt AS `total_indirect_order_cnt`
,total_indirect_gmv AS `total_indirect_gmv`
FROM test1
WHERE pt = '20220626'
) m1
ON m0.brand_name = m1.brand_name AND m0.pt = m1.pt
) m2
ON m0.brand_name = m2.brand_name AND m0.pt = m2.pt
) c0
ORDER BY pt ASC, uvctr DESC
```
But the result is not sorted by `uvctr` in descending order:
```
20230626 0.3156979034107179 \N \N
20230626 0.19390556368960468 \N \N
20230626 0.2624629016490004 \N \N
```
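A quick check in plain Python that the rows above violate the requested `ORDER BY pt ASC, uvctr DESC` (all three rows share the same `pt`, so they should be in descending `uvctr` order):

```python
# uvctr values for pt = 20230626, in the order the server returned them.
returned = [0.3156979034107179, 0.19390556368960468, 0.2624629016490004]
expected = sorted(returned, reverse=True)  # what ORDER BY uvctr DESC requires
print(returned == expected)  # False: the result order is wrong
```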
| https://github.com/ClickHouse/ClickHouse/issues/51660 | https://github.com/ClickHouse/ClickHouse/pull/51481 | 008238be71340234586ca8ab93fe18845db91174 | 83e9ec117c04bb3db2ce05400857924e15cf0894 | "2023-06-30T09:45:43Z" | c++ | "2023-07-18T10:34:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,648 | ["src/Functions/FunctionsHashing.h", "tests/queries/0_stateless/02790_keyed_hash_bug.reference", "tests/queries/0_stateless/02790_keyed_hash_bug.sql"] | use-of-uninitialized-value: src/Columns/ColumnVector.cpp:440:9 | https://s3.amazonaws.com/clickhouse-test-reports/51641/a705b08bd81658e878d7b7d214b057c661bbed69/fuzzer_astfuzzermsan/report.html
```
==162==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x5581d4138256 in DB::ColumnVector<unsigned long>::get64(unsigned long) const build_docker/./src/Columns/ColumnVector.cpp:440:9
#1 0x5581a329678f in DB::impl::parseSipHashKey(DB::ColumnWithTypeAndName const&) FunctionsHashingMisc.cpp
#2 0x5581a33f3133 in DB::TargetSpecific::Default::FunctionAnyHash<DB::SipHash64KeyedImpl, true, DB::impl::SipHashKey>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x1139e133) (BuildId: f0df3dab103b1fc75daaab84587a1b3b78f41216)
#3 0x5581a326ef39 in DB::ImplementationSelector<DB::IFunction>::selectAndExecute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x11219f39) (BuildId: f0df3dab103b1fc75daaab84587a1b3b78f41216)
#4 0x5581a33f2ab1 in DB::FunctionAnyHash<DB::SipHash64KeyedImpl, true, DB::impl::SipHashKey>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x1139dab1) (BuildId: f0df3dab103b1fc75daaab84587a1b3b78f41216)
#5 0x5581a1735ae1 in DB::IFunction::executeImplDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0xf6e0ae1) (BuildId: f0df3dab103b1fc75daaab84587a1b3b78f41216)
#6 0x5581a1735417 in DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0xf6e0417) (BuildId: f0df3dab103b1fc75daaab84587a1b3b78f41216)
#7 0x5581ce0bf9b9 in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:245:15
#8 0x5581ce0c1aa4 in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:302:22
#9 0x5581ce0c7817 in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:374:16
#10 0x5581cfbdb364 in DB::executeActionForHeader(DB::ActionsDAG::Node const*, std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>>) build_docker/./src/Interpreters/ActionsDAG.cpp:537:49
#11 0x5581cfbdb364 in DB::ActionsDAG::updateHeader(DB::Block) const build_docker/./src/Interpreters/ActionsDAG.cpp:656:44
#12 0x5581d771f062 in DB::ExpressionTransform::transformHeader(DB::Block, DB::ActionsDAG const&) build_docker/./src/Processors/Transforms/ExpressionTransform.cpp:8:23
#13 0x5581d7c3722f in DB::ExpressionStep::ExpressionStep(DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&) build_docker/./src/Processors/QueryPlan/ExpressionStep.cpp:31:9
#14 0x5581d2a59339 in std::__1::__unique_if<DB::ExpressionStep>::__unique_single std::__1::make_unique[abi:v15000]<DB::ExpressionStep, DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&>(DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&) build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:714:32
#15 0x5581d2a59339 in DB::InterpreterSelectQuery::executeExpression(DB::QueryPlan&, std::__1::shared_ptr<DB::ActionsDAG> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:2795:28
#16 0x5581d2a37b67 in DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::optional<DB::Pipe>) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:1811:17
#17 0x5581d2a304cc in DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:899:5
#18 0x5581d2c6a9ba in DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:303:38
#19 0x5581d2c6d2bc in DB::InterpreterSelectWithUnionQuery::execute() build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:377:5
#20 0x5581d381fc31 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) build_docker/./src/Interpreters/executeQuery.cpp:746:40
#21 0x5581d3813aa1 in DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) build_docker/./src/Interpreters/executeQuery.cpp:1168:30
#22 0x5581d6cf88d7 in DB::TCPHandler::runImpl() build_docker/./src/Server/TCPHandler.cpp:421:24
#23 0x5581d6d37c5e in DB::TCPHandler::run() build_docker/./src/Server/TCPHandler.cpp:2057:9
#24 0x5581e15caddf in Poco::Net::TCPServerConnection::start() build_docker/./base/poco/Net/src/TCPServerConnection.cpp:43:3
#25 0x5581e15cbc41 in Poco::Net::TCPServerDispatcher::run() build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:115:20
#26 0x5581e1b38e45 in Poco::PooledThread::run() build_docker/./base/poco/Foundation/src/ThreadPool.cpp:188:14
#27 0x5581e1b35c6d in Poco::(anonymous namespace)::RunnableHolder::run() build_docker/./base/poco/Foundation/src/Thread.cpp:45:11
#28 0x5581e1b32bd1 in Poco::ThreadImpl::runnableEntry(void*) build_docker/./base/poco/Foundation/src/Thread_POSIX.cpp:335:27
#29 0x7f2863bc3608 in start_thread /build/glibc-SzIz7B/glibc-2.31/nptl/pthread_create.c:477:8
#30 0x7f2863ae8132 in __clone /build/glibc-SzIz7B/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Uninitialized value was created by a heap allocation
#0 0x5581a16c73e2 in malloc (/workspace/clickhouse+0xf6723e2) (BuildId: f0df3dab103b1fc75daaab84587a1b3b78f41216)
#1 0x5581b9a9a638 in Allocator<false, false>::allocNoTrack(unsigned long, unsigned long) build_docker/./src/Common/Allocator.h:237:27
#2 0x5581b9a9a1c9 in Allocator<false, false>::alloc(unsigned long, unsigned long) build_docker/./src/Common/Allocator.h:103:16
#3 0x5581b9b92b5a in void DB::PODArrayBase<8ul, 4096ul, Allocator<false, false>, 63ul, 64ul>::alloc<>(unsigned long) build_docker/./src/Common/PODArray.h:131:65
#4 0x5581b9b92b5a in DB::PODArrayBase<8ul, 4096ul, Allocator<false, false>, 63ul, 64ul>::alloc_for_num_elements(unsigned long) build_docker/./src/Common/PODArray.h:125:9
#5 0x5581b9b92b5a in DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 63ul, 64ul>::PODArray(unsigned long) build_docker/./src/Common/PODArray.h:341:15
#6 0x5581d413c59c in DB::ColumnVector<unsigned long>::ColumnVector(unsigned long) build_docker/./src/Columns/ColumnVector.h:137:45
#7 0x5581d413c59c in COW<DB::IColumn>::mutable_ptr<DB::ColumnVector<unsigned long>> COWHelper<DB::ColumnVectorHelper, DB::ColumnVector<unsigned long>>::create<unsigned long const&>(unsigned long const&) build_docker/./src/Common/COW.h:284:71
#8 0x5581d413c59c in DB::ColumnVector<unsigned long>::replicate(DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 63ul, 64ul> const&) const build_docker/./src/Columns/ColumnVector.cpp:840:16
#9 0x5581d3c06124 in DB::ColumnConst::convertToFullColumn() const build_docker/./src/Columns/ColumnConst.cpp:48:18
#10 0x5581d3c09a4d in DB::ColumnConst::convertToFullColumnIfConst() const build_docker/./src/Columns/ColumnConst.h:39:16
#11 0x5581ce9c4664 in DB::materializeBlock(DB::Block const&) build_docker/./src/Core/Block.cpp:813:64
#12 0x5581d047e677 in DB::Aggregator::Params::getHeader(DB::Block const&, bool, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::vector<DB::AggregateDescription, std::__1::allocator<DB::AggregateDescription>> const&, bool) build_docker/./src/Interpreters/Aggregator.cpp:483:12
#13 0x5581d7b7c133 in DB::Aggregator::Params::getHeader(DB::Block const&, bool) const build_docker/./src/Interpreters/Aggregator.h:1094:75
#14 0x5581d7b7c133 in DB::AggregatingStep::AggregatingStep(DB::DataStream const&, DB::Aggregator::Params, std::__1::vector<DB::GroupingSetsParams, std::__1::allocator<DB::GroupingSetsParams>>, bool, unsigned long, unsigned long, unsigned long, unsigned long, bool, bool, DB::SortDescription, DB::SortDescription, bool, bool, bool) build_docker/./src/Processors/QueryPlan/AggregatingStep.cpp:113:38
#15 0x5581d2a7e736 in std::__1::__unique_if<DB::AggregatingStep>::__unique_single std::__1::make_unique[abi:v15000]<DB::AggregatingStep, DB::DataStream const&, DB::Aggregator::Params, std::__1::vector<DB::GroupingSetsParams, std::__1::allocator<DB::GroupingSetsParams>>, bool&, DB::SettingFieldNumber<unsigned long> const&, DB::SettingFieldNumber<unsigned long> const&, unsigned long&, unsigned long&, bool&, DB::SettingFieldNumber<bool> const&, DB::SortDescription, DB::SortDescription, bool const&, DB::SettingFieldNumber<bool> const&, bool>(DB::DataStream const&, DB::Aggregator::Params&&, std::__1::vector<DB::GroupingSetsParams, std::__1::allocator<DB::GroupingSetsParams>>&&, bool&, DB::SettingFieldNumber<unsigned long> const&, DB::SettingFieldNumber<unsigned long> const&, unsigned long&, unsigned long&, bool&, DB::SettingFieldNumber<bool> const&, DB::SortDescription&&, DB::SortDescription&&, bool const&, DB::SettingFieldNumber<bool> const&, bool&&) build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:714:32
#16 0x5581d2a57591 in DB::InterpreterSelectQuery::executeAggregation(DB::QueryPlan&, std::__1::shared_ptr<DB::ActionsDAG> const&, bool, bool, std::__1::shared_ptr<DB::InputOrderInfo const>) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:2686:29
#17 0x5581d2a36db9 in DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::optional<DB::Pipe>) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:1717:17
#18 0x5581d2a304cc in DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:899:5
#19 0x5581d2c6a9ba in DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:303:38
#20 0x5581d2c6d2bc in DB::InterpreterSelectWithUnionQuery::execute() build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:377:5
#21 0x5581d381fc31 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) build_docker/./src/Interpreters/executeQuery.cpp:746:40
#22 0x5581d3813aa1 in DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) build_docker/./src/Interpreters/executeQuery.cpp:1168:30
#23 0x5581d6cf88d7 in DB::TCPHandler::runImpl() build_docker/./src/Server/TCPHandler.cpp:421:24
#24 0x5581d6d37c5e in DB::TCPHandler::run() build_docker/./src/Server/TCPHandler.cpp:2057:9
SUMMARY: MemorySanitizer: use-of-uninitialized-value build_docker/./src/Columns/ColumnVector.cpp:440:9 in DB::ColumnVector<unsigned long>::get64(unsigned long) const
``` | https://github.com/ClickHouse/ClickHouse/issues/51648 | https://github.com/ClickHouse/ClickHouse/pull/51804 | 82a80cf694b01849ade4c8412d6b0238fcf7f639 | d7782f6518bc88cdbe87716ac95f45f96e381882 | "2023-06-30T07:44:33Z" | c++ | "2023-07-06T00:47:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,588 | ["src/Functions/FunctionBinaryArithmetic.h", "tests/queries/0_stateless/02531_ipv4_arithmetic.reference", "tests/queries/0_stateless/02531_ipv4_arithmetic.sql"] | Logical error: 'Arguments of 'intDiv' have incorrect data types: 'number' of type 'UInt64', 'toIPv4OrNull('0')' of type 'IPv4''. | ```
SELECT sum(u) FROM (SELECT intDiv(number, toIPv4OrNull('0')) AS k, uniqCombined(reinterpretAsString(number % NULL)) AS u FROM numbers(65535 * 10) GROUP BY k) WHERE toIPv4OrNull(NULL) WITH TOTALS
```
https://s3.amazonaws.com/clickhouse-test-reports/51585/85d621dc389fc53e610b95d16d7d69cb0a94c4a2/fuzzer_astfuzzerubsan/report.html
```
2023.06.29 13:51:54.328922 [ 150 ] {1031a32c-a8c9-4a44-a047-6c4866b9741a} <Fatal> : Logical error: 'Arguments of 'intDiv' have incorrect data types: 'number' of type 'UInt64', 'toIPv4OrNull('0')' of type 'IPv4''.
2023.06.29 13:51:54.329251 [ 658 ] {} <Fatal> BaseDaemon: ########################################
2023.06.29 13:51:54.329293 [ 658 ] {} <Fatal> BaseDaemon: (version 23.6.1.1, build id: 938311B7C4FD0C470F830E91742CD217C99B301C, git hash: 98e90d9a1da0ee728526a65851849a4a94f0c93a) (from thread 150) (query_id: 1031a32c-a8c9-4a44-a047-6c4866b9741a) (query: SELECT sum(u) FROM (SELECT intDiv(number, toIPv4OrNull('0')) AS k, uniqCombined(reinterpretAsString(number % NULL)) AS u FROM numbers(65535 * 10) GROUP BY k) WHERE toIPv4OrNull(NULL) WITH TOTALS) Received signal Aborted (6)
2023.06.29 13:51:54.329315 [ 658 ] {} <Fatal> BaseDaemon:
2023.06.29 13:51:54.329331 [ 658 ] {} <Fatal> BaseDaemon: Stack trace: 0x00007fa05900f00b 0x00007fa058fee859 0x0000556693801e48 0x00005566938027ae 0x0000556688c1bc43 0x000055668c23ff98 0x000055668c24010c 0x000055668c23d38e 0x000055668c23a871 0x000055668715495f 0x0000556687153501 0x000055669f9b2cf9 0x000055669f9b492b 0x000055669f9b5da6 0x00005566a05caedf 0x00005566a3bcb6d3 0x00005566a3ed2cdc 0x00005566a17930f8 0x00005566a178033a 0x00005566a177c868 0x00005566a187fdd8 0x00005566a178c217 0x00005566a177ed0e 0x00005566a177c868 0x00005566a187fdd8 0x00005566a188155f 0x00005566a1ddfd56 0x00005566a1dd961c 0x00005566a35b8b93 0x00005566a35e48b2 0x00005566a4da761e 0x00005566a4da875a 0x00005566a503d4b0 0x00005566a50385d1 0x00007fa0591c6609 0x00007fa0590eb133
2023.06.29 13:51:54.329364 [ 658 ] {} <Fatal> BaseDaemon: 3. gsignal @ 0x00007fa05900f00b in ?
2023.06.29 13:51:54.329379 [ 658 ] {} <Fatal> BaseDaemon: 4. abort @ 0x00007fa058fee859 in ?
2023.06.29 13:51:54.353893 [ 658 ] {} <Fatal> BaseDaemon: 5. ./build_docker/./src/Common/Exception.cpp:49: DB::abortOnFailedAssertion(String const&) @ 0x000000003798be48 in /workspace/clickhouse
2023.06.29 13:51:54.377516 [ 658 ] {} <Fatal> BaseDaemon: 6. ./build_docker/./src/Common/Exception.cpp:93: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000003798c7ae in /workspace/clickhouse
2023.06.29 13:51:55.752406 [ 658 ] {} <Fatal> BaseDaemon: 7. DB::Exception::Exception<String, String const&, String, String const&, String>(int, FormatStringHelperImpl<std::type_identity<String>::type, std::type_identity<String const&>::type, std::type_identity<String>::type, std::type_identity<String const&>::type, std::type_identity<String>::type>, String&&, String const&, String&&, String const&, String&&) @ 0x000000002cda5c43 in /workspace/clickhouse
2023.06.29 13:51:57.083602 [ 658 ] {} <Fatal> BaseDaemon: 8. DB::FunctionBinaryArithmetic<DB::DivideIntegralImpl, DB::NameIntDiv, false, true, true>::executeImpl2(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 63ul, 64ul> const*) const @ 0x00000000303c9f98 in /workspace/clickhouse
2023.06.29 13:51:58.414855 [ 658 ] {} <Fatal> BaseDaemon: 9. DB::FunctionBinaryArithmetic<DB::DivideIntegralImpl, DB::NameIntDiv, false, true, true>::executeImpl2(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 63ul, 64ul> const*) const @ 0x00000000303ca10c in /workspace/clickhouse
2023.06.29 13:51:59.746468 [ 658 ] {} <Fatal> BaseDaemon: 10. DB::FunctionBinaryArithmetic<DB::DivideIntegralImpl, DB::NameIntDiv, false, true, true>::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x00000000303c738e in /workspace/clickhouse
2023.06.29 13:52:01.078353 [ 658 ] {} <Fatal> BaseDaemon: 11. DB::FunctionBinaryArithmeticWithConstants<DB::DivideIntegralImpl, DB::NameIntDiv, false, true, true>::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x00000000303c4871 in /workspace/clickhouse
2023.06.29 13:52:02.416992 [ 658 ] {} <Fatal> BaseDaemon: 12. DB::IFunction::executeImplDryRun(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x000000002b2de95f in /workspace/clickhouse
2023.06.29 13:52:03.750315 [ 658 ] {} <Fatal> BaseDaemon: 13. DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x000000002b2dd501 in /workspace/clickhouse
2023.06.29 13:52:03.765028 [ 658 ] {} <Fatal> BaseDaemon: 14. ./build_docker/./src/Functions/IFunction.cpp:0: DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000043b3ccf9 in /workspace/clickhouse
2023.06.29 13:52:03.780457 [ 658 ] {} <Fatal> BaseDaemon: 15.1. inlined from ./build_docker/./contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:115: intrusive_ptr
2023.06.29 13:52:03.780493 [ 658 ] {} <Fatal> BaseDaemon: 15.2. inlined from ./build_docker/./contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:122: boost::intrusive_ptr<DB::IColumn const>::operator=(boost::intrusive_ptr<DB::IColumn const>&&)
2023.06.29 13:52:03.780509 [ 658 ] {} <Fatal> BaseDaemon: 15.3. inlined from ./build_docker/./src/Common/COW.h:136: COW<DB::IColumn>::immutable_ptr<DB::IColumn>::operator=(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&&)
2023.06.29 13:52:03.780523 [ 658 ] {} <Fatal> BaseDaemon: 15. ./build_docker/./src/Functions/IFunction.cpp:302: DB::IExecutableFunction::executeWithoutSparseColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000043b3e92b in /workspace/clickhouse
2023.06.29 13:52:03.796563 [ 658 ] {} <Fatal> BaseDaemon: 16. ./build_docker/./src/Functions/IFunction.cpp:0: DB::IExecutableFunction::execute(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000043b3fda6 in /workspace/clickhouse
2023.06.29 13:52:03.998780 [ 658 ] {} <Fatal> BaseDaemon: 17.1. inlined from ./build_docker/./contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:115: intrusive_ptr
2023.06.29 13:52:03.998838 [ 658 ] {} <Fatal> BaseDaemon: 17.2. inlined from ./build_docker/./contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:122: boost::intrusive_ptr<DB::IColumn const>::operator=(boost::intrusive_ptr<DB::IColumn const>&&)
2023.06.29 13:52:03.998855 [ 658 ] {} <Fatal> BaseDaemon: 17.3. inlined from ./build_docker/./src/Common/COW.h:136: COW<DB::IColumn>::immutable_ptr<DB::IColumn>::operator=(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&&)
2023.06.29 13:52:03.998872 [ 658 ] {} <Fatal> BaseDaemon: 17.4. inlined from ./build_docker/./src/Interpreters/ActionsDAG.cpp:537: DB::executeActionForHeader(DB::ActionsDAG::Node const*, std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>>)
2023.06.29 13:52:03.998885 [ 658 ] {} <Fatal> BaseDaemon: 17. ./build_docker/./src/Interpreters/ActionsDAG.cpp:656: DB::ActionsDAG::updateHeader(DB::Block) const @ 0x0000000044754edf in /workspace/clickhouse
2023.06.29 13:52:04.010759 [ 658 ] {} <Fatal> BaseDaemon: 18.1. inlined from ./build_docker/./contrib/llvm-project/libcxx/include/unordered_map:1153: ~unordered_map
2023.06.29 13:52:04.010800 [ 658 ] {} <Fatal> BaseDaemon: 18.2. inlined from ./build_docker/./src/Core/Block.h:25: ~Block
2023.06.29 13:52:04.010813 [ 658 ] {} <Fatal> BaseDaemon: 18. ./build_docker/./src/Processors/Transforms/ExpressionTransform.cpp:8: DB::ExpressionTransform::transformHeader(DB::Block, DB::ActionsDAG const&) @ 0x0000000047d556d3 in /workspace/clickhouse
2023.06.29 13:52:04.033280 [ 658 ] {} <Fatal> BaseDaemon: 19.1. inlined from ./build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:815: std::shared_ptr<DB::ActionsDAG>::operator->[abi:v15000]() const
2023.06.29 13:52:04.033330 [ 658 ] {} <Fatal> BaseDaemon: 19.2. inlined from ./build_docker/./src/Processors/QueryPlan/ExpressionStep.cpp:20: DB::getTraits(std::shared_ptr<DB::ActionsDAG> const&, DB::Block const&, DB::SortDescription const&)
2023.06.29 13:52:04.033344 [ 658 ] {} <Fatal> BaseDaemon: 19. ./build_docker/./src/Processors/QueryPlan/ExpressionStep.cpp:32: DB::ExpressionStep::ExpressionStep(DB::DataStream const&, std::shared_ptr<DB::ActionsDAG> const&) @ 0x000000004805ccdc in /workspace/clickhouse
2023.06.29 13:52:04.173504 [ 658 ] {} <Fatal> BaseDaemon: 20. ./build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:2633: DB::InterpreterSelectQuery::executeAggregation(DB::QueryPlan&, std::shared_ptr<DB::ActionsDAG> const&, bool, bool, std::shared_ptr<DB::InputOrderInfo const>) @ 0x000000004591d0f8 in /workspace/clickhouse
2023.06.29 13:52:04.310572 [ 658 ] {} <Fatal> BaseDaemon: 21. ./build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:0: DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::optional<DB::Pipe>) @ 0x000000004590a33a in /workspace/clickhouse
2023.06.29 13:52:04.445562 [ 658 ] {} <Fatal> BaseDaemon: 22. ./build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:0: DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x0000000045906868 in /workspace/clickhouse
2023.06.29 13:52:04.488943 [ 658 ] {} <Fatal> BaseDaemon: 23. ./build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:0: DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x0000000045a09dd8 in /workspace/clickhouse
2023.06.29 13:52:04.610279 [ 658 ] {} <Fatal> BaseDaemon: 24. ./build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:0: DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&) @ 0x0000000045916217 in /workspace/clickhouse
2023.06.29 13:52:04.748414 [ 658 ] {} <Fatal> BaseDaemon: 25. ./build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:1475: DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::optional<DB::Pipe>) @ 0x0000000045908d0e in /workspace/clickhouse
2023.06.29 13:52:04.884247 [ 658 ] {} <Fatal> BaseDaemon: 26. ./build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:0: DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x0000000045906868 in /workspace/clickhouse
2023.06.29 13:52:04.927495 [ 658 ] {} <Fatal> BaseDaemon: 27. ./build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:0: DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x0000000045a09dd8 in /workspace/clickhouse
2023.06.29 13:52:04.971409 [ 658 ] {} <Fatal> BaseDaemon: 28. ./build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:0: DB::InterpreterSelectWithUnionQuery::execute() @ 0x0000000045a0b55f in /workspace/clickhouse
2023.06.29 13:52:05.041962 [ 658 ] {} <Fatal> BaseDaemon: 29. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000045f69d56 in /workspace/clickhouse
2023.06.29 13:52:05.117563 [ 658 ] {} <Fatal> BaseDaemon: 30. ./build_docker/./src/Interpreters/executeQuery.cpp:1168: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x0000000045f6361c in /workspace/clickhouse
2023.06.29 13:52:05.180618 [ 658 ] {} <Fatal> BaseDaemon: 31. ./build_docker/./src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x0000000047742b93 in /workspace/clickhouse
2023.06.29 13:52:05.272387 [ 658 ] {} <Fatal> BaseDaemon: 32. ./build_docker/./src/Server/TCPHandler.cpp:2059: DB::TCPHandler::run() @ 0x000000004776e8b2 in /workspace/clickhouse
2023.06.29 13:52:05.277938 [ 658 ] {} <Fatal> BaseDaemon: 33. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x0000000048f3161e in /workspace/clickhouse
2023.06.29 13:52:05.285361 [ 658 ] {} <Fatal> BaseDaemon: 34.1. inlined from ./build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: std::unique_ptr<Poco::Net::TCPServerConnection, std::default_delete<Poco::Net::TCPServerConnection>>::reset[abi:v15000](Poco::Net::TCPServerConnection*)
2023.06.29 13:52:05.285399 [ 658 ] {} <Fatal> BaseDaemon: 34.2. inlined from ./build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:259: ~unique_ptr
2023.06.29 13:52:05.285411 [ 658 ] {} <Fatal> BaseDaemon: 34. ./build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x0000000048f3275a in /workspace/clickhouse
2023.06.29 13:52:05.293171 [ 658 ] {} <Fatal> BaseDaemon: 35. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x00000000491c74b0 in /workspace/clickhouse
2023.06.29 13:52:05.300409 [ 658 ] {} <Fatal> BaseDaemon: 36.1. inlined from ./build_docker/./base/poco/Foundation/include/Poco/AutoPtr.h:205: Poco::AutoPtr<Poco::ThreadImpl::ThreadData>::operator->()
2023.06.29 13:52:05.300437 [ 658 ] {} <Fatal> BaseDaemon: 36. ./build_docker/./base/poco/Foundation/src/Thread_POSIX.cpp:350: Poco::ThreadImpl::runnableEntry(void*) @ 0x00000000491c25d1 in /workspace/clickhouse
2023.06.29 13:52:05.300452 [ 658 ] {} <Fatal> BaseDaemon: 37. ? @ 0x00007fa0591c6609 in ?
2023.06.29 13:52:05.300466 [ 658 ] {} <Fatal> BaseDaemon: 38. clone @ 0x00007fa0590eb133 in ?
2023.06.29 13:52:05.300482 [ 658 ] {} <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
2023.06.29 13:52:16.249419 [ 658 ] {} <Fatal> BaseDaemon: This ClickHouse version is not official and should be upgraded to the official build.
2023.06.29 13:52:16.249578 [ 658 ] {} <Fatal> BaseDaemon: Changed settings: max_block_size = 10000, min_insert_block_size_rows = 1000000, min_insert_block_size_bytes = 1000000, max_threads = 16, receive_timeout = 10., receive_data_timeout_ms = 10000, extremes = true, use_uncompressed_cache = false, optimize_move_to_prewhere = true, optimize_move_to_prewhere_if_final = false, totals_mode = 'after_having_auto', allow_suspicious_low_cardinality_types = true, compile_expressions = true, group_by_two_level_threshold = 1, group_by_two_level_threshold_bytes = 100000000, enable_positional_arguments = false, force_primary_key = true, log_queries = true, distributed_product_mode = 'local', table_function_remote_max_addresses = 200, join_use_nulls = false, single_join_prefer_left_table = false, insert_distributed_sync = true, prefer_global_in_and_join = true, max_rows_to_read = 8192, max_rows_to_group_by = 100000, group_by_overflow_mode = 'any', max_bytes_before_external_group_by = 1000000, max_execution_time = 10., max_rows_in_join = 10, join_algorithm = 'auto', max_memory_usage = 9830400, send_logs_level = 'fatal', decimal_check_overflow = false, prefer_localhost_replica = false, allow_introspection_functions = true, max_partitions_per_insert_block = 100, mutations_sync = 2, convert_query_to_cnf = false, cast_keep_nullable = false, cast_ipv4_ipv6_default_on_conversion_error = false, optimize_trivial_insert_select = false, legacy_column_name_of_tuple_literal = true, local_filesystem_read_method = 'pread', remote_filesystem_read_method = 'read', load_marks_asynchronously = false, allow_deprecated_syntax_for_merge_tree = true, allow_asynchronous_read_from_io_pool_for_merge_tree = false, allow_experimental_nlp_functions = true, partial_merge_join_optimizations = 1, input_format_json_read_numbers_as_strings = true, input_format_ipv4_default_on_conversion_error = false, input_format_ipv6_default_on_conversion_error = false
2023.06.29 13:53:02.502600 [ 141 ] {} <Fatal> Application: Child process was terminated by signal 6.
``` | https://github.com/ClickHouse/ClickHouse/issues/51588 | https://github.com/ClickHouse/ClickHouse/pull/51642 | 3ce8dc5503904f18c288cb447f08f469fde8d14f | 61d05aeca4dc49e46aa92a5ed0e0b4f705cb21d3 | "2023-06-29T11:05:57Z" | c++ | "2023-07-28T12:27:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,581 | ["docs/en/sql-reference/functions/array-functions.md", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | `select hasAll([1], ['a'])` raises an exception rather than returns 0 as documented | **Describe what's wrong**
Executing `select hasAll([1], ['a'])` raises an exception.
> SELECT hasAll([1], ['a'])
Query id: c809db5e-e770-4eb3-a73e-68a7561acd47
0 rows in set. Elapsed: 0.442 sec.
Received exception:
Code: 386. DB::Exception: There is no supertype for types UInt8, String because some of them are String/FixedString and some of them are not: While processing hasAll([1], ['a']). (NO_COMMON_TYPE)
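For contrast, when both array arguments share a supertype the lookup evaluates without error — a minimal sketch (the results follow from `hasAll`'s membership semantics):

```sql
-- Both arrays are String: no supertype error; 'a' is absent from the haystack, so 0.
SELECT hasAll(['1'], ['a']);
-- Matching numeric types: 1 is present in the haystack, so 1.
SELECT hasAll([1, 2], [1]);
```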
However, the [documentation](https://clickhouse.com/docs/en/sql-reference/functions/array-functions#hasall) says it should return 0. | https://github.com/ClickHouse/ClickHouse/issues/51581 | https://github.com/ClickHouse/ClickHouse/pull/51634 | 6fe8a7389dc303f429e88835c7e4e97518aa3c36 | c1bef56c551b0d9cbf3d8d58953717205cfdc37d | "2023-06-29T06:48:13Z" | c++ | "2023-06-29T19:38:15Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,574 | ["src/Common/SystemLogBase.cpp", "src/Common/SystemLogBase.h", "src/Daemon/BaseDaemon.cpp", "src/Interpreters/CrashLog.cpp", "tests/integration/test_crash_log/__init__.py", "tests/integration/test_crash_log/test.py"] | system.crash_log is not created sometimes. | Table system.crash_log is not created after a force crash.
**Version**
```bash
./build/programs/clickhouse-client --version
ClickHouse client version 23.6.1.1.
```
**Does it reproduce on recent release?**
Yes.
**How to reproduce**
First, you must delete a table that has already been created, if it exists.
1) Start the ClickHouse server.
2) Drop table system.crash_log
3) Stop the ClickHouse server.
Now reproduce the problem:
1) Start ClickHouse server.
2) Run the bash command:
```bash
sudo kill -4 $(pgrep clickhouse)
```
If the table was successfully created, repeat all the steps again (drop system.crash_log, restart ClickHouse, and kill the process).
**Expected behavior**
Table files are created, and a crash log can be selected by SQL command after server restart:
```SQL
SELECT * FROM system.crash_log
```
**Actual behavior**
The problem reproduces intermittently. Possible outcomes:
1) No table files are created. (Table data can't be selected.)
```bash
ls /home/admin/ClickHouse2/ClickHouse/build/programs/data/system/crash_log
ls: cannot access '/home/admin/ClickHouse2/ClickHouse/build/programs/data/system/crash_log': No such file or directory
```
2) A table is created, but SELECT * FROM system.crash_log returns 0 entries:
```bash
tree /home/admin/ClickHouse2/ClickHouse/build/programs/data/system/crash_log
/home/admin/ClickHouse2/ClickHouse/build/programs/data/system/crash_log
├── detached
└── format_version.txt
```
3) Crash_log was successfully created, and table data can be selected (No error here). | https://github.com/ClickHouse/ClickHouse/issues/51574 | https://github.com/ClickHouse/ClickHouse/pull/51720 | 9df928eb132dcd99e446f9dc7897ea613f7c64a7 | f3684f78b7fe79b5474857a0fbe39d494f4fe995 | "2023-06-29T00:16:40Z" | c++ | "2023-07-13T08:49:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,532 | ["src/Client/Suggest.cpp"] | [Master] CLI suggestions are broken when connecting to older ClickHouse releases | AFAIK it only affects master (unreleased)
I'm not sure how common of a scenario this is (it is in my local setup), but if you use ClickHouse master (23.6) client to connect to an old release (like 22.8) the suggestions won't work because the query used to retrieve them is invalid.
```
$ ch_client_prod-02
ClickHouse client version 23.6.1.1.
Connecting to clickhouse-02:59000 as user default.
Connected to ClickHouse server version 22.8.11 revision 54460.
ClickHouse server version is older than ClickHouse client. It may indicate that the server is out of date and can be upgraded.
Warnings:
* Linux transparent hugepages are set to "always". Check /sys/kernel/mm/transparent_hugepage/enabled
Cannot load data for command line suggestions: Code: 115. DB::Exception: Received from clickhouse-02:59000. DB::Exception: Unknown setting allow_experimental_analyzer: Maybe you meant ['allow_experimental_map_type','allow_experimental_geo_types']. (UNKNOWN_SETTING) (version 23.6.1.1)
production-02 :) Select version();
SELECT version()
Query id: aed1b84d-99d7-41ab-81b0-ab0380ec5a2c
┌─version()──┐
│ 22.8.11.15 │
└────────────┘
1 row in set. Elapsed: 0.001 sec.
```
It seems this was introduced in https://github.com/ClickHouse/ClickHouse/pull/50605 because of https://github.com/ClickHouse/ClickHouse/issues/50669. Since this is hardcoded in the CLI you can't try to change/disable the behaviour in any way.
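The failure mode can be illustrated in isolation — a hedged sketch (any query whose `SETTINGS` clause carries a setting unknown to the older server is rejected the same way the suggestions query is):

```sql
-- Run against a 22.8 server: allow_experimental_analyzer does not exist there yet,
-- so the server answers with Code: 115 (UNKNOWN_SETTING), as in the log above.
SELECT 1 SETTINGS allow_experimental_analyzer = 0;
```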
cc @hanfei1991 | https://github.com/ClickHouse/ClickHouse/issues/51532 | https://github.com/ClickHouse/ClickHouse/pull/51578 | 2ce7bcaa3d5fb36a11ae0211eabd5a89c2a8c5de | 76316f9cc8e9f78515181a839dfbf64cd20274ff | "2023-06-28T12:06:06Z" | c++ | "2023-07-05T11:49:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,525 | ["docs/en/sql-reference/functions/tuple-functions.md", "src/Common/assert_cast.h", "src/Functions/tupleElement.cpp", "tests/queries/0_stateless/02116_tuple_element.sql", "tests/queries/0_stateless/02286_tuple_numeric_identifier.sql", "tests/queries/0_stateless/02354_tuple_element_with_default.reference", "tests/queries/0_stateless/02354_tuple_element_with_default.sql"] | Fuzzer: Logical error in tupleElement() | `SELECT tupleElement([[(count('2147483646'), 1)]], 'aaaa', [[1, 2, 3]])`
--> `Logical error: 'Bad cast from type DB::IColumn const* to DB::ColumnArray const*'.`
https://s3.amazonaws.com/clickhouse-test-reports/0/abf56b80a9303b0ffea4996753d7c670acfcdf61/fuzzer_astfuzzerdebug/report.html | https://github.com/ClickHouse/ClickHouse/issues/51525 | https://github.com/ClickHouse/ClickHouse/pull/51534 | bd88a2195a2f867deb5dbf673be550f56d0239d3 | 9930fd4756f5b9b9fad4f8cc66d47e5331f817b8 | "2023-06-28T11:19:30Z" | c++ | "2023-06-29T04:50:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,516 | ["src/Disks/StoragePolicy.cpp"] | The error message of modify the storage policy is incorrect | (you don't have to strictly follow this form)
**Describe the unexpected behaviour**
**How to reproduce**
- ClickHouse 23.6.1.1
- master branch
```
--the config of storage_policy
<clickhouse>
<storage_configuration>
<disks>
<disk_01>
<path>/mnt/data1/ckdata/</path>
</disk_01>
<disk_02>
<path>/mnt/data2/ckdata/</path>
</disk_02>
</disks>
<policies>
<jbod_policies>
<volumes>
<jbod_volume>
<disk>disk_01</disk>
</jbod_volume>
</volumes>
<move_factor>0.2</move_factor>
</jbod_policies>
<jbod_policies_new>
<volumes>
<jbod_volume_new>
<disk>disk_02</disk>
</jbod_volume_new>
</volumes>
<move_factor>0.2</move_factor>
</jbod_policies_new>
</policies>
</storage_configuration>
</clickhouse>
-- create table sql
CREATE TABLE default.storage_policy_test
(
`name` String,
`age` String
)
ENGINE = MergeTree
ORDER BY name
SETTINGS storage_policy = 'jbod_policies', index_granularity = 8192
-- modify storage policy sql
ALTER TABLE default.storage_policy_test MODIFY SETTING storage_policy = 'jbod_policies_new'
-- error message
Received exception from server (version 23.6.1):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: New storage policy `jbod_policies` shall contain disks of old one. (BAD_ARGUMENTS)
```
The error message for modifying the storage policy is incorrect: the name printed after 'New storage policy' should be `jbod_policies_new`, not `jbod_policies`.
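The disks behind each policy (the data this validity check compares) can be inspected directly — a minimal sketch:

```sql
SELECT policy_name, volume_name, disks
FROM system.storage_policies
WHERE policy_name IN ('jbod_policies', 'jbod_policies_new');
```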
Add any other context about the problem here. | https://github.com/ClickHouse/ClickHouse/issues/51516 | https://github.com/ClickHouse/ClickHouse/pull/51854 | fa9514f21612e60c0939b7bfd1b5f4f1f1f410c8 | 8da8b79cc4b17aa9e2e645a3c825e752c7d0467c | "2023-06-28T09:15:18Z" | c++ | "2023-07-07T01:58:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,510 | ["src/Interpreters/ActionsVisitor.cpp", "tests/analyzer_tech_debt.txt", "tests/queries/0_stateless/02818_parameterized_view_with_cte_multiple_usage.reference", "tests/queries/0_stateless/02818_parameterized_view_with_cte_multiple_usage.sql"] | Parametrized view does not work with many usage of one parameter | Clickhouse v.23.3.5.9
Working example
```
create view default.test_param_view as
with {param_test_val:UInt8} as param_test_val
select param_test_val,
arrayCount((a)->(a < param_test_val), t.arr) as cnt1
from (select [1,2,3,4,5] as arr) t;
```
Not working example
```
create view default.test_param_view as
with {param_test_val:UInt8} as param_test_val
select param_test_val,
arrayCount((a)->(a < param_test_val), t.arr) as cnt1,
arrayCount((a)->(a < param_test_val+1), t.arr) as cnt2
from (select [1,2,3,4,5] as arr) t;
```
This fails with the error message:
```
DB::Exception: Column 'param_test_val' already exists: While processing {param_test_val:UInt8} AS param_test_val, arrayCount(a -> (a < param_test_val), arr) AS cnt1, arrayCount(a -> (a < (param_test_val + 1)), arr) AS cnt2. Stack trace:
```
But it works if I use the `identity` function, for example:
```
create view default.test_param_view as
with {param_test_val:UInt8} as param_test_val
select identity(param_test_val) as new_param_test_val,
arrayCount((a)->(a < new_param_test_val), t.arr) as cnt1,
arrayCount((a)->(a < new_param_test_val+1), t.arr) as cnt2
from (select [1,2,3,4,5] as arr) t;
``` | https://github.com/ClickHouse/ClickHouse/issues/51510 | https://github.com/ClickHouse/ClickHouse/pull/52328 | b6bcc32acb16af84d40ba6b7fb466e239eb977a4 | 5db88e677b982e1d1417ccc1196696a31fc43341 | "2023-06-28T07:41:45Z" | c++ | "2023-07-26T22:06:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,486 | ["src/IO/VarInt.cpp", "src/IO/VarInt.h", "src/Server/TCPHandler.cpp", "tests/queries/0_stateless/02812_large_varints.reference", "tests/queries/0_stateless/02812_large_varints.sql"] | Catched `Logical error: 'Distributed task iterator is not initialized'.` | **Describe the bug**
A link to the report https://s3.amazonaws.com/clickhouse-test-reports/51478/5fdb19672bc443222560df00e0b76add05a1ad10/fuzzer_astfuzzerasan/report.html
**How to reproduce**
The query in the report is `SELECT number % NULL, NULL, sum(number) AS val FROM remote('127.0.0.{2,3}', numbers(toUInt64(-1))) GROUP BY number, number % 1023 WITH CUBE ORDER BY toUInt16(10) DESC NULLS LAST SETTINGS group_by_use_nulls = 1`
| https://github.com/ClickHouse/ClickHouse/issues/51486 | https://github.com/ClickHouse/ClickHouse/pull/51905 | f8eaa3e8e6791d5aaa5bddb947efde4da23fd3d7 | 1eaefd031892cc9e16e51c6d67445303b59b570b | "2023-06-27T14:27:44Z" | c++ | "2023-07-07T19:33:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,400 | ["docs/en/operations/system-tables/parts.md", "src/Storages/MergeTree/IMergeTreeDataPart.cpp", "src/Storages/MergeTree/IMergeTreeDataPart.h", "src/Storages/System/StorageSystemParts.cpp", "tests/queries/0_stateless/02117_show_create_table_system.reference"] | Add column `primary_key_bytes` to the `system.parts` table, telling the size of the primary.idx/cidx file on disk. | null | https://github.com/ClickHouse/ClickHouse/issues/51400 | https://github.com/ClickHouse/ClickHouse/pull/51496 | 44929b7d2873704c4df4bc54ddfbd4c36ef322de | 381ab07e1ba8454346c7e99dc0504941802badf0 | "2023-06-26T06:01:47Z" | c++ | "2023-07-17T08:39:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,322 | ["base/base/bit_cast.h", "src/Functions/transform.cpp", "tests/queries/0_stateless/02797_transform_narrow_types.reference", "tests/queries/0_stateless/02797_transform_narrow_types.sql"] | Wrong result of function `transform` | **Describe what's wrong**
```sql
SELECT transform(-1, [-1, 2], ['f', 's'], 'g')
```
```
┌─transform(-1, [-1, 2], ['f', 's'], 'g')─┐
│ g                                        │
└──────────────────────────────────────────┘
```
The expected result is `f`.
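For reference, spelling the same mapping out as explicit branches yields the expected value — a minimal equivalent sketch:

```sql
-- transform(-1, [-1, 2], ['f', 's'], 'g') should act like this branch chain:
SELECT multiIf(-1 = -1, 'f', -1 = 2, 's', 'g');  -- returns 'f'
```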
**Additional context**
Those cases work correctly:
```sql
SELECT transform(2, [-1, 2], ['f', 's'], 'g')
```
```
┌─transform(2, [-1, 2], ['f', 's'], 'g')─┐
│ s                                       │
└─────────────────────────────────────────┘
```
```sql
SELECT transform(-1::Int64, [-1, 2]::Array(Int64), ['f', 's'], 'g') AS res
```
```
┌─res─┐
│ f   │
└─────┘
```
| https://github.com/ClickHouse/ClickHouse/issues/51322 | https://github.com/ClickHouse/ClickHouse/pull/51350 | 786987a29a67bb8e26685b1856bf52a20c142ef1 | 396eb704269b7d67cfbf654d8b111d826292ddc2 | "2023-06-23T12:23:26Z" | c++ | "2023-06-24T03:49:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,312 | ["src/Interpreters/HashJoin.cpp", "tests/queries/0_stateless/02771_semi_join_use_nulls.reference", "tests/queries/0_stateless/02771_semi_join_use_nulls.sql.j2"] | Logical error: Invalid number of rows in Chunk column Float32 (in `JoiningTransform`) | https://s3.amazonaws.com/clickhouse-test-reports/51284/ccb42d0afa202aba6b4a8459bae971afa87a67dd/fuzzer_astfuzzerdebug/report.html
```
2023.06.22 21:16:38.668932 [ 478 ] {39f31655-7a80-4128-b4fa-5cd019b085ab} <Fatal> : Logical error: 'Invalid number of rows in Chunk column Float32 position 2: expected 0, got 1'.
2023.06.22 21:16:38.669981 [ 483 ] {} <Fatal> BaseDaemon: ########################################
2023.06.22 21:16:38.670232 [ 483 ] {} <Fatal> BaseDaemon: (version 23.6.1.1, build id: 4092B432FE99A940C68A13B15B290DE709E03F3A, git hash: 91c6341d987f642fd0a246da9cbc804799764896) (from thread 478) (query_id: 39f31655-7a80-4128-b4fa-5cd019b085ab) (query: SELECT toUInt64(1.0001, NULL), toUInt64(2147483648), * FROM (SELECT corr(toUInt64(2147483646, NULL), id, id) AS corr_value FROM test_table__fuzz_1 GROUP BY value) AS subquery ANTI LEFT JOIN test_table ON subquery.corr_value = test_table.id WHERE (test_table.id >= test_table.id) AND (NOT (test_table.id >= test_table.id))) Received signal Aborted (6)
2023.06.22 21:16:38.670427 [ 483 ] {} <Fatal> BaseDaemon:
2023.06.22 21:16:38.670622 [ 483 ] {} <Fatal> BaseDaemon: Stack trace: 0x00007fd3270c000b 0x00007fd32709f859 0x000000002430807e 0x00000000243080f5 0x00000000243084ff 0x000000001ad77faa 0x000000001ae1d5e8 0x000000002eac0d0e 0x000000002eac124e 0x000000002ef94eed 0x000000002ef9477d 0x000000002eb0c883 0x000000002eb0c5c0 0x000000002eaf11e1 0x000000002eaf14f7 0x000000002eaf24d8 0x000000002eaf2435 0x000000002eaf2415 0x000000002eaf23f5 0x000000002eaf23c0 0x00000000243614b6 0x00000000243609b5 0x000000002446b0c3 0x00000000244749a4 0x0000000024474975 0x0000000024474959 0x00000000244748bd 0x00000000244747c0 0x0000000024474735 0x0000000024474715 0x00000000244746f5 0x00000000244746c0 0x00000000243614b6 0x00000000243609b5 0x0000000024467b83 0x000000002446eda4 0x000000002446ed55 0x000000002446ec7d 0x000000002446e762 0x00007fd327277609 0x00007fd32719c133
2023.06.22 21:16:38.670827 [ 483 ] {} <Fatal> BaseDaemon: 4. raise @ 0x00007fd3270c000b in ?
2023.06.22 21:16:38.670997 [ 483 ] {} <Fatal> BaseDaemon: 5. abort @ 0x00007fd32709f859 in ?
2023.06.22 21:16:38.773573 [ 483 ] {} <Fatal> BaseDaemon: 6. /build/src/Common/Exception.cpp:42: DB::abortOnFailedAssertion(String const&) @ 0x000000002430807e in /workspace/clickhouse
2023.06.22 21:16:38.875400 [ 483 ] {} <Fatal> BaseDaemon: 7. /build/src/Common/Exception.cpp:65: DB::handle_error_code(String const&, int, bool, std::vector<void*, std::allocator<void*>> const&) @ 0x00000000243080f5 in /workspace/clickhouse
2023.06.22 21:16:38.964068 [ 483 ] {} <Fatal> BaseDaemon: 8. /build/src/Common/Exception.cpp:93: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x00000000243084ff in /workspace/clickhouse
2023.06.22 21:16:39.057906 [ 483 ] {} <Fatal> BaseDaemon: 9. /build/src/Common/Exception.h:54: DB::Exception::Exception(String&&, int, bool) @ 0x000000001ad77faa in /workspace/clickhouse
2023.06.22 21:16:40.126587 [ 483 ] {} <Fatal> BaseDaemon: 10. /build/src/Common/Exception.h:81: DB::Exception::Exception<String, String, String>(int, FormatStringHelperImpl<std::type_identity<String>::type, std::type_identity<String>::type, std::type_identity<String>::type>, String&&, String&&, String&&) @ 0x000000001ae1d5e8 in /workspace/clickhouse
2023.06.22 21:16:40.184511 [ 483 ] {} <Fatal> BaseDaemon: 11. /build/src/Processors/Chunk.cpp:73: DB::Chunk::checkNumRowsIsConsistent() @ 0x000000002eac0d0e in /workspace/clickhouse
2023.06.22 21:16:40.241180 [ 483 ] {} <Fatal> BaseDaemon: 12. /build/src/Processors/Chunk.cpp:58: DB::Chunk::setColumns(std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>, unsigned long) @ 0x000000002eac124e in /workspace/clickhouse
2023.06.22 21:16:40.335138 [ 483 ] {} <Fatal> BaseDaemon: 13. /build/src/Processors/Transforms/JoiningTransform.cpp:194: DB::JoiningTransform::transform(DB::Chunk&) @ 0x000000002ef94eed in /workspace/clickhouse
2023.06.22 21:16:40.428583 [ 483 ] {} <Fatal> BaseDaemon: 14. /build/src/Processors/Transforms/JoiningTransform.cpp:125: DB::JoiningTransform::work() @ 0x000000002ef9477d in /workspace/clickhouse
2023.06.22 21:16:40.467736 [ 483 ] {} <Fatal> BaseDaemon: 15. /build/src/Processors/Executors/ExecutionThreadContext.cpp:47: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) @ 0x000000002eb0c883 in /workspace/clickhouse
2023.06.22 21:16:40.503948 [ 483 ] {} <Fatal> BaseDaemon: 16. /build/src/Processors/Executors/ExecutionThreadContext.cpp:92: DB::ExecutionThreadContext::executeTask() @ 0x000000002eb0c5c0 in /workspace/clickhouse
2023.06.22 21:16:40.608528 [ 483 ] {} <Fatal> BaseDaemon: 17. /build/src/Processors/Executors/PipelineExecutor.cpp:255: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x000000002eaf11e1 in /workspace/clickhouse
2023.06.22 21:16:40.712252 [ 483 ] {} <Fatal> BaseDaemon: 18. /build/src/Processors/Executors/PipelineExecutor.cpp:221: DB::PipelineExecutor::executeSingleThread(unsigned long) @ 0x000000002eaf14f7 in /workspace/clickhouse
2023.06.22 21:16:40.808508 [ 483 ] {} <Fatal> BaseDaemon: 19. /build/src/Processors/Executors/PipelineExecutor.cpp:343: DB::PipelineExecutor::spawnThreads()::$_0::operator()() const @ 0x000000002eaf24d8 in /workspace/clickhouse
2023.06.22 21:16:40.928306 [ 483 ] {} <Fatal> BaseDaemon: 20. /build/contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0&>()()) std::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) @ 0x000000002eaf2435 in /workspace/clickhouse
2023.06.22 21:16:41.044775 [ 483 ] {} <Fatal> BaseDaemon: 21. /build/contrib/llvm-project/libcxx/include/__functional/invoke.h:480: void std::__invoke_void_return_wrapper<void, true>::__call<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) @ 0x000000002eaf2415 in /workspace/clickhouse
2023.06.22 21:16:41.161358 [ 483 ] {} <Fatal> BaseDaemon: 22. /build/contrib/llvm-project/libcxx/include/__functional/function.h:235: std::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>::operator()[abi:v15000]() @ 0x000000002eaf23f5 in /workspace/clickhouse
2023.06.22 21:16:41.277854 [ 483 ] {} <Fatal> BaseDaemon: 23. /build/contrib/llvm-project/libcxx/include/__functional/function.h:716: void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000002eaf23c0 in /workspace/clickhouse
2023.06.22 21:16:41.334917 [ 483 ] {} <Fatal> BaseDaemon: 24. /build/contrib/llvm-project/libcxx/include/__functional/function.h:848: std::__function::__policy_func<void ()>::operator()[abi:v15000]() const @ 0x00000000243614b6 in /workspace/clickhouse
2023.06.22 21:16:41.387899 [ 483 ] {} <Fatal> BaseDaemon: 25. /build/contrib/llvm-project/libcxx/include/__functional/function.h:1187: std::function<void ()>::operator()() const @ 0x00000000243609b5 in /workspace/clickhouse
2023.06.22 21:16:41.458708 [ 483 ] {} <Fatal> BaseDaemon: 26. /build/src/Common/ThreadPool.cpp:416: ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000002446b0c3 in /workspace/clickhouse
2023.06.22 21:16:41.549631 [ 483 ] {} <Fatal> BaseDaemon: 27. /build/src/Common/ThreadPool.cpp:180: void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()::operator()() const @ 0x00000000244749a4 in /workspace/clickhouse
2023.06.22 21:16:41.642874 [ 483 ] {} <Fatal> BaseDaemon: 28. /build/contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<void>()()) std::__invoke[abi:v15000]<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()&>(void&&) @ 0x0000000024474975 in /workspace/clickhouse
2023.06.22 21:16:41.736453 [ 483 ] {} <Fatal> BaseDaemon: 29. /build/contrib/llvm-project/libcxx/include/tuple:1789: decltype(auto) std::__apply_tuple_impl[abi:v15000]<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()&, std::tuple<>&>(void&&, std::tuple<>&, std::__tuple_indices<>) @ 0x0000000024474959 in /workspace/clickhouse
2023.06.22 21:16:41.830270 [ 483 ] {} <Fatal> BaseDaemon: 30. /build/contrib/llvm-project/libcxx/include/tuple:1798: decltype(auto) std::apply[abi:v15000]<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()&, std::tuple<>&>(void&&, std::tuple<>&) @ 0x00000000244748bd in /workspace/clickhouse
2023.06.22 21:16:41.902130 [ 483 ] {} <Fatal> BaseDaemon: 31. /build/src/Common/ThreadPool.h:229: ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'()::operator()() @ 0x00000000244747c0 in /workspace/clickhouse
2023.06.22 21:16:41.995356 [ 483 ] {} <Fatal> BaseDaemon: 32. /build/contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<void>()()) std::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'()&>(void&&) @ 0x0000000024474735 in /workspace/clickhouse
2023.06.22 21:16:42.087515 [ 483 ] {} <Fatal> BaseDaemon: 33. /build/contrib/llvm-project/libcxx/include/__functional/invoke.h:480: void std::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'()&) @ 0x0000000024474715 in /workspace/clickhouse
2023.06.22 21:16:42.177741 [ 483 ] {} <Fatal> BaseDaemon: 34. /build/contrib/llvm-project/libcxx/include/__functional/function.h:235: std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>::operator()[abi:v15000]() @ 0x00000000244746f5 in /workspace/clickhouse
2023.06.22 21:16:42.268176 [ 483 ] {} <Fatal> BaseDaemon: 35. /build/contrib/llvm-project/libcxx/include/__functional/function.h:716: void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x00000000244746c0 in /workspace/clickhouse
2023.06.22 21:16:42.325143 [ 483 ] {} <Fatal> BaseDaemon: 36. /build/contrib/llvm-project/libcxx/include/__functional/function.h:848: std::__function::__policy_func<void ()>::operator()[abi:v15000]() const @ 0x00000000243614b6 in /workspace/clickhouse
2023.06.22 21:16:42.378057 [ 483 ] {} <Fatal> BaseDaemon: 37. /build/contrib/llvm-project/libcxx/include/__functional/function.h:1187: std::function<void ()>::operator()() const @ 0x00000000243609b5 in /workspace/clickhouse
2023.06.22 21:16:42.447796 [ 483 ] {} <Fatal> BaseDaemon: 38. /build/src/Common/ThreadPool.cpp:416: ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0x0000000024467b83 in /workspace/clickhouse
2023.06.22 21:16:42.532552 [ 483 ] {} <Fatal> BaseDaemon: 39. /build/src/Common/ThreadPool.cpp:180: void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()::operator()() const @ 0x000000002446eda4 in /workspace/clickhouse
2023.06.22 21:16:42.623358 [ 483 ] {} <Fatal> BaseDaemon: 40. /build/contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<void>()()) std::__invoke[abi:v15000]<void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&) @ 0x000000002446ed55 in /workspace/clickhouse
2023.06.22 21:16:42.714052 [ 483 ] {} <Fatal> BaseDaemon: 41. /build/contrib/llvm-project/libcxx/include/thread:285: void std::__thread_execute[abi:v15000]<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(std::tuple<void, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>&, std::__tuple_indices<>) @ 0x000000002446ec7d in /workspace/clickhouse
2023.06.22 21:16:42.804609 [ 483 ] {} <Fatal> BaseDaemon: 42. /build/contrib/llvm-project/libcxx/include/thread:295: void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000002446e762 in /workspace/clickhouse
2023.06.22 21:16:42.804846 [ 483 ] {} <Fatal> BaseDaemon: 43. ? @ 0x00007fd327277609 in ?
2023.06.22 21:16:42.805015 [ 483 ] {} <Fatal> BaseDaemon: 44. clone @ 0x00007fd32719c133 in ?
2023.06.22 21:16:42.805181 [ 483 ] {} <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
2023.06.22 21:16:46.965134 [ 483 ] {} <Fatal> BaseDaemon: This ClickHouse version is not official and should be upgraded to the official build.
2023.06.22 21:16:46.965526 [ 483 ] {} <Fatal> BaseDaemon: Changed settings: receive_timeout = 10., receive_data_timeout_ms = 10000, allow_suspicious_low_cardinality_types = true, compile_expressions = true, min_count_to_compile_expression = 0, log_queries = true, table_function_remote_max_addresses = 200, max_execution_time = 10., max_memory_usage = 10000000000, send_logs_level = 'fatal', allow_introspection_functions = true
2023.06.22 21:16:49.386147 [ 146 ] {} <Fatal> Application: Child process was terminated by signal 6.
```
cc: @KochetovNicolai, @vdimir | https://github.com/ClickHouse/ClickHouse/issues/51312 | https://github.com/ClickHouse/ClickHouse/pull/51601 | b28fa1f0540fd5517f1cca3dc74a701892c957b5 | b75a83d32ddbcbde9ca7a03d9dde2e63cea4f261 | "2023-06-23T10:01:47Z" | c++ | "2023-07-05T23:21:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,257 | ["docs/en/operations/system-tables/events.md", "docs/en/operations/system-tables/metrics.md", "src/Storages/System/StorageSystemEvents.cpp", "src/Storages/System/StorageSystemEvents.h", "src/Storages/System/StorageSystemMetrics.cpp", "src/Storages/System/StorageSystemMetrics.h", "tests/queries/0_stateless/02117_show_create_table_system.reference"] | system.events and system.metrics tables should have column `name` as an alias to `event` and `metric` | null | https://github.com/ClickHouse/ClickHouse/issues/51257 | https://github.com/ClickHouse/ClickHouse/pull/52315 | 0d8f8521445aaca2f34abafc880dc8d035bcfae4 | 8daeeedc8f04b83a09f1608dbced4a2973c6c2db | "2023-06-21T19:16:08Z" | c++ | "2023-07-30T20:37:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,236 | ["src/DataTypes/FieldToDataType.cpp", "src/DataTypes/FieldToDataType.h", "tests/queries/0_stateless/02832_integer_type_inference.reference", "tests/queries/0_stateless/02832_integer_type_inference.sql"] | Integer type inference seems confusing | Integer type inference seems incorect incorect
```sql
select [-4741124612489978151, -3236599669630092879, 5607475129431807682]
```
throws this exception:
```
Received exception from server (version 23.3.2):
Code: 386. DB::Exception: Received from localhost:9101. DB::Exception: There is no supertype for types UInt64, Int64 because some of them are signed integers and some are unsigned integers, but there is no signed integer type, that can exactly represent all required unsigned integer values: While processing transform(commit, [-4741124612489978151, -3236599669630092879, 5607475129431807682], range(toInt64(3))). (NO_COMMON_TYPE)
```
while in fact all values could fit in Int64.
For example:
```sql
SELECT [-4741124612489978151, -3236599669630092879, toInt64(5607475129431807682)]
ββarray(-4741124612489978151, -3236599669630092879, toInt64(5607475129431807682))ββ
β [-4741124612489978151,-3236599669630092879,5607475129431807682]                 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
```
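For reference, the type that inference actually picks for the working variant can be inspected directly (a small probe along the same lines as the example above):

```sql
SELECT toTypeName([-4741124612489978151, -3236599669630092879, toInt64(5607475129431807682)])
-- expected: Array(Int64)
```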
| https://github.com/ClickHouse/ClickHouse/issues/51236 | https://github.com/ClickHouse/ClickHouse/pull/53003 | aea9ac7de28f092a6367749f5ee8d99c41453e86 | 5354af00aedd659e44bea6eb4fa9717ca32aa2b2 | "2023-06-21T13:22:48Z" | c++ | "2023-09-12T10:05:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,211 | ["src/Processors/QueryPlan/Optimizations/liftUpFunctions.cpp", "tests/queries/0_stateless/02789_functions_after_sorting_and_columns_with_same_names_bug.reference", "tests/queries/0_stateless/02789_functions_after_sorting_and_columns_with_same_names_bug.sql", "tests/queries/0_stateless/02789_functions_after_sorting_and_columns_with_same_names_bug_2.reference", "tests/queries/0_stateless/02789_functions_after_sorting_and_columns_with_same_names_bug_2.sql"] | DB::Exception: Block structure mismatch in (columns with identical name must have identical structure) stream: different types | > You have to provide the following information whenever possible.
We have a table and left join query.
```
CREATE TABLE test
(
`pt` String,
`count_distinct_exposure_uv` AggregateFunction(uniqHLL12, Int64)
)
ENGINE = AggregatingMergeTree
ORDER BY pt
```
```
SELECT *
FROM
(
SELECT m0.pt AS pt
,m0.`exposure_uv` AS exposure_uv
,round(m2.exposure_uv,4) AS exposure_uv_hb_last_value
,if(m2.exposure_uv IS NULL OR m2.exposure_uv = 0,NULL,round((m0.exposure_uv - m2.exposure_uv) * 1.0 / m2.exposure_uv,4)) AS exposure_uv_hb_diff_percent
,round(m1.exposure_uv,4) AS exposure_uv_tb_last_value
,if(m1.exposure_uv IS NULL OR m1.exposure_uv = 0,NULL,round((m0.exposure_uv - m1.exposure_uv) * 1.0 / m1.exposure_uv,4)) AS exposure_uv_tb_diff_percent
FROM
(
SELECT m0.pt AS pt
,`exposure_uv` AS `exposure_uv`
FROM
(
SELECT pt AS pt
,CASE WHEN COUNT(`exposure_uv`) > 0 THEN AVG(`exposure_uv`) ELSE 0 END AS `exposure_uv`
FROM
(
SELECT pt AS pt
,uniqHLL12Merge(count_distinct_exposure_uv) AS `exposure_uv`
FROM test
GROUP BY pt
) m
GROUP BY pt
) m0
) m0
LEFT JOIN
(
SELECT m0.pt AS pt
,`exposure_uv` AS `exposure_uv`
FROM
(
SELECT formatDateTime(addYears(parseDateTimeBestEffort(pt),1),'%Y%m%d') AS pt
,CASE WHEN COUNT(`exposure_uv`) > 0 THEN AVG(`exposure_uv`) ELSE 0 END AS `exposure_uv`
FROM
(
SELECT pt AS pt
,uniqHLL12Merge(count_distinct_exposure_uv) AS `exposure_uv`
FROM test
GROUP BY pt
) m
GROUP BY pt
) m0
) m1
ON m0.pt = m1.pt
LEFT JOIN
(
SELECT m0.pt AS pt
,`exposure_uv` AS `exposure_uv`
FROM
(
SELECT formatDateTime(addDays(toDate(parseDateTimeBestEffort(pt)),1),'%Y%m%d') AS pt
,CASE WHEN COUNT(`exposure_uv`) > 0 THEN AVG(`exposure_uv`) ELSE 0 END AS `exposure_uv`
FROM
(
SELECT pt AS pt
,uniqHLL12Merge(count_distinct_exposure_uv) AS `exposure_uv`
FROM test
GROUP BY pt
) m
GROUP BY pt
) m0
) m2
ON m0.pt = m2.pt
) c0
ORDER BY pt ASC
, exposure_uv DESC
settings join_use_nulls = 1
```
We received the exception:
```
Received exception from server (version 23.6.1):
Code: 352. DB::Exception: Received from localhost:9996. DB::Exception: Block structure mismatch in (columns with identical name must have identical structure) stream: different types:
exposure_uv Nullable(Float64) Nullable(size = 0, Float64(size = 0), UInt8(size = 0))
exposure_uv Float64 Float64(size = 0). (AMBIGUOUS_COLUMN_NAME)
```
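Until the root cause is found, one avenue worth trying — an untested assumption, based on the error only appearing for the full query shape — is to turn off query-plan-level rewrites for the statement and re-run it:

```sql
-- hypothetical workaround sketch, not verified against this repro;
-- the setting exists in recent versions
SET query_plan_enable_optimizations = 0;
```

If that makes the error disappear, it would point at a plan rewrite rather than the join itself.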
| https://github.com/ClickHouse/ClickHouse/issues/51211 | https://github.com/ClickHouse/ClickHouse/pull/51481 | 008238be71340234586ca8ab93fe18845db91174 | 83e9ec117c04bb3db2ce05400857924e15cf0894 | "2023-06-21T01:26:49Z" | c++ | "2023-07-18T10:34:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,173 | ["src/Processors/QueryPlan/Optimizations/optimizeTree.cpp", "src/Processors/QueryPlan/Optimizations/projectionsCommon.cpp", "src/Processors/QueryPlan/Optimizations/projectionsCommon.h", "tests/queries/0_stateless/01710_normal_projection_with_query_plan_optimization.reference", "tests/queries/0_stateless/01710_normal_projection_with_query_plan_optimization.sql"] | ORDER BY projection is used when main table have better selectivity | **Describe the issue**
ORDER BY projection is used when the main table has better selectivity.
**How to reproduce**
```
CREATE TABLE _local.t0
(
`c1` Int64,
`c2` Int64,
`c3` Int64,
`c4` Int64,
PROJECTION p1
(
SELECT
c1,
c2,
c4
ORDER BY
c2,
c1
)
)
ENGINE = MergeTree
ORDER BY (c1, c2)
SETTINGS index_granularity = 8192;

INSERT INTO t0 SELECT
90+(number % 10),
rand(1),
rand(2),
rand(3)
FROM numbers_mt(10000000);
SELECT
c1,
c4
FROM t0
WHERE (c1 = 99) AND ((c2 >= 0) AND (c2 <= 100000000))
SETTINGS allow_experimental_projection_optimization = 1
FORMAT `Null`
Read 237 568 rows, 5.44 MiB in 0.010382 sec., 22882681.56424581 rows/sec., 523.74 MiB/sec.
SELECT
c1,
c4
FROM t0
WHERE (c1 = 99) AND ((c2 >= 0) AND (c2 <= 100000000))
SETTINGS allow_experimental_projection_optimization = 0
FORMAT `Null`
```
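To confirm which part set each variant actually reads, `EXPLAIN` on the same query can help (sketch; the exact output shape depends on the version, and `indexes = 1` is assumed available):

```sql
EXPLAIN indexes = 1
SELECT c1, c4
FROM t0
WHERE (c1 = 99) AND ((c2 >= 0) AND (c2 <= 100000000))
SETTINGS allow_experimental_projection_optimization = 1;
-- the ReadFromMergeTree step is expected to name the projection (p1) when it is chosen
```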
**Additional context**
Related https://github.com/ClickHouse/ClickHouse/issues/49150
| https://github.com/ClickHouse/ClickHouse/issues/51173 | https://github.com/ClickHouse/ClickHouse/pull/52308 | a39ba00ec34bea1b53f062d9f7507b86c48e4d40 | 4b0be1e535e792648d85c6f42e07cf79e6e34156 | "2023-06-19T23:57:54Z" | c++ | "2023-07-20T16:25:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,137 | ["src/Access/ContextAccess.cpp", "src/Core/BaseSettings.h", "src/Daemon/BaseDaemon.cpp"] | Show changed settings in fatal error messages. | **Describe the issue**
If an error happened in the query context, we can extract and list all the changed settings. | https://github.com/ClickHouse/ClickHouse/issues/51137 | https://github.com/ClickHouse/ClickHouse/pull/51138 | f54333ba37c3c28d5ab2c03b2bafecfbb2c09c19 | f9558345e886876b9132d9c018e357f7fa9b22a3 | "2023-06-18T18:15:49Z" | c++ | "2023-06-19T12:50:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,125 | ["src/Storages/MergeTree/IMergeTreeDataPartInfoForReader.h", "src/Storages/MergeTree/IMergeTreeReader.cpp", "src/Storages/MergeTree/IMergeTreeReader.h", "src/Storages/MergeTree/LoadedMergeTreeDataPartInfoForReader.h", "src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeReaderCompact.cpp", "src/Storages/MergeTree/MergeTreeReaderWide.cpp", "src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp", "src/Storages/MergeTree/checkDataPart.cpp", "src/Storages/MergeTree/checkDataPart.h"] | Exception "while reading from part" does not tell you about the table name | **Use case**
`Code: 432. DB::Exception: Unknown codec family code: 60: (while reading column event_date): (while reading from part /var/lib/clickhouse/store/a07/a07660e7-b4aa-4520-a802-833765c28feb/202306_96661_96661_0/ from mark 0 with max_rows_to_read = 8): While executing MergeTreeThread. (UNKNOWN_CODEC) (version 23.5.1.34385 (official build))`
There is no table name in this exception. | https://github.com/ClickHouse/ClickHouse/issues/51125 | https://github.com/ClickHouse/ClickHouse/pull/51270 | cc3398159efe1db53a4043b8a157ad525ab5bed2 | ad7fe63339352335acc5412ad313c93c6e8652c7 | "2023-06-17T15:08:22Z" | c++ | "2023-06-23T13:01:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,090 | ["tests/queries/0_stateless/02789_jit_cannot_convert_column.reference", "tests/queries/0_stateless/02789_jit_cannot_convert_column.sql", "tests/queries/0_stateless/02790_jit_wrong_result.reference", "tests/queries/0_stateless/02790_jit_wrong_result.sql"] | JIT compiler issue / Cannot convert column because it is non constant in source stream but must be constant in result | https://fiddle.clickhouse.com/52b751b5-fa79-4865-bd9b-49aca042bc13
```sql
SELECT
sum(c),
toInt32((h - null::Nullable(DateTime)) / 3600) + 1 AS a
FROM
(
SELECT count() AS c, h
FROM ( SELECT now() AS h )
WHERE toInt32((h - null::Nullable(DateTime)) / 3600) + 1 = 1
GROUP BY h
)
GROUP BY a settings min_count_to_compile_expression=0;
```

```
DB::Exception: Cannot convert column
`plus(toInt32(divide(minus(h, CAST(NULL, 'Nullable(DateTime)')), 3600)), 1)`
because it is non constant in source stream but must be constant in result. (ILLEGAL_COLUMN)
```
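For completeness, the workaround noted at the end can also be applied per query rather than per session (untested variant of the same statement):

```sql
-- same repro with expression JIT disabled for this query only
SELECT
    sum(c),
    toInt32((h - null::Nullable(DateTime)) / 3600) + 1 AS a
FROM
(
    SELECT count() AS c, h
    FROM ( SELECT now() AS h )
    WHERE toInt32((h - null::Nullable(DateTime)) / 3600) + 1 = 1
    GROUP BY h
)
GROUP BY a
SETTINGS compile_expressions = 0;
```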
WA: `compile_expressions=0` | https://github.com/ClickHouse/ClickHouse/issues/51090 | https://github.com/ClickHouse/ClickHouse/pull/51113 | db82e94e68c48dd01a2e91be597cbedc7b56a188 | e28dc5d61c924992ddd0066e0e5e5bb05b848db3 | "2023-06-16T16:00:40Z" | c++ | "2023-12-08T02:27:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,089 | ["programs/local/LocalServer.cpp", "src/Client/ClientBase.cpp", "tests/queries/0_stateless/02096_bad_options_in_client_and_local.reference", "tests/queries/0_stateless/02096_bad_options_in_client_and_local.sh", "tests/queries/0_stateless/02833_local_udf_options.reference", "tests/queries/0_stateless/02833_local_udf_options.sh", "tests/queries/0_stateless/scripts_udf/function.xml", "tests/queries/0_stateless/scripts_udf/udf.sh"] | Executable UDF path w/ clickhouse-local | Related to ancient https://github.com/ClickHouse/ClickHouse/issues/31188
It currently seems pretty much impossible to run executable UDFs using `clickhouse-local`. As is now well known, modern ClickHouse demands UDF scripts to reside in a [user_scripts_path](https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#server_configuration_parameters-user_scripts_path), and while this is simple with `clickhouse-server`, so far I could not find an equivalent SET parameter to convince `clickhouse-local` to use a custom user scripts folder location. On the other hand, the dynamic /tmp folder is impossible to use or guess.
Fictional example:
```
SELECT * FROM executable('./some_script.py', TabSeparated, 'name String', (SELECT name));
```
```
DB::Exception: Executable file ./some_script.py does not exist inside user scripts folder /tmp/clickhouse-local-2818615-1686929524-9014242541746612907/user_scripts/. (UNSUPPORTED_METHOD)
```
UDF functions (`CREATE FUNCTION test AS x -> (x + 1);SELECT test(1);`) work fine in comparison.
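For comparison, this is how the location is configured for `clickhouse-server` (sketch of the documented server setting; the path shown is the usual default and purely illustrative):

```xml
<clickhouse>
    <!-- documented server-side setting; clickhouse-local offers no equivalent today -->
    <user_scripts_path>/var/lib/clickhouse/user_scripts/</user_scripts_path>
</clickhouse>
```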
Any suggestions? | https://github.com/ClickHouse/ClickHouse/issues/51089 | https://github.com/ClickHouse/ClickHouse/pull/52643 | d6e2e8b92c5ab7dfb069c5688ed5069644880211 | ac2de3c79fb6637aa14aa3e9c254737234ca6cf4 | "2023-06-16T15:56:59Z" | c++ | "2023-07-28T07:21:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 51,083 | ["src/Dictionaries/DictionaryStructure.h", "tests/queries/0_stateless/02811_ip_dict_attribute.reference", "tests/queries/0_stateless/02811_ip_dict_attribute.sql"] | Ipv4 is not supported as a dictionary attribute anymore | https://fiddle.clickhouse.com/5b6d5838-ce20-4a42-adb1-587fdaffe3da
https://fiddle.clickhouse.com/18eca811-e125-418f-9031-6e0d108ce64c
```sql
create table src ( id UInt64, ip4 IPv4) Engine=Memory
as select * from values((1, '0.0.0.0'),(2, '10.10.10.10'));
CREATE DICTIONARY dict
(
id UInt64,
ip4 IPv4
)
PRIMARY KEY id
LAYOUT(HASHED())
SOURCE (CLICKHOUSE ( table src))
lifetime ( 10);
select dictGet('dict', 'ip4', arrayJoin([1,2]));
```
DB::Exception: Unknown type IPv4 for dictionary attribute. (UNKNOWN_TYPE) | https://github.com/ClickHouse/ClickHouse/issues/51083 | https://github.com/ClickHouse/ClickHouse/pull/51756 | f7d89309064f4c18aedfd7abcf3ee077c308dd76 | 9448d42aea6c5befae09cc923570fd6575a6d6f8 | "2023-06-16T14:13:07Z" | c++ | "2023-07-27T19:59:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,996 | ["tests/queries/0_stateless/02210_processors_profile_log.reference", "tests/queries/0_stateless/02210_processors_profile_log.sql"] | 02210_processors_profile_log is flaky | https://play.clickhouse.com/play?user=play#U0VMRUNUIHB1bGxfcmVxdWVzdF9udW1iZXIsIGNoZWNrX3N0YXJ0X3RpbWUsIGNoZWNrX25hbWUsIHRlc3RfbmFtZSwgdGVzdF9zdGF0dXMsIGNoZWNrX3N0YXR1cywgcmVwb3J0X3VybApGUk9NIGNoZWNrcwpXSEVSRSAxCiAgICBBTkQgcHVsbF9yZXF1ZXN0X251bWJlciA9IDAKICAgIEFORCB0ZXN0X3N0YXR1cyAhPSAnU0tJUFBFRCcKICAgIEFORCB0ZXN0X3N0YXR1cyAhPSAnT0snCiAgICBBTkQgY2hlY2tfc3RhdHVzICE9ICdzdWNjZXNzJwogICAgQU5EIHRlc3RfbmFtZSBsaWtlICclMDIyMTBfcHJvY2Vzc29yc19wcm9maWxlX2xvZyUnCk9SREVSIEJZIGNoZWNrX3N0YXJ0X3RpbWUgZGVzYywgY2hlY2tfbmFtZSwgdGVzdF9uYW1lCg==
https://s3.amazonaws.com/clickhouse-test-reports/0/8f9c74debb002b74180ad534e828714b49bba44a/stateless_tests__release_.html
https://s3.amazonaws.com/clickhouse-test-reports/0/cd3c4475a941ab3b30c0f366b4ba961ef26b00de/stateless_tests__release_.html
https://s3.amazonaws.com/clickhouse-test-reports/0/8dddec4c4cfa00d6d27a0785dd63bdae7d885b53/stateless_tests__release__s3_storage__[1_2].html | https://github.com/ClickHouse/ClickHouse/issues/50996 | https://github.com/ClickHouse/ClickHouse/pull/51641 | 32e0348caa6ee34d1f631fceffbc6a93b09953d2 | a3ae0cc385ef11277df3d588ec367fc7443b4cec | "2023-06-14T15:15:45Z" | c++ | "2023-07-05T21:45:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,846 | ["src/TableFunctions/TableFunctionS3.cpp"] | S3 table function does not work for pre-signed URL | **Use case**
Example:
```
SELECT count(*)
FROM s3('https://dpt-aggregations.s3.ca-central-1.amazonaws.com/export_4.parquet?response-content-disposition=inline&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEGMaCXVzLWVhc3QtMSJHMEUCIQDBDvlcLaeb86%2FL33VjL8OoOHSbipWuxEm57FDSFAEbxQIgVXTpYAL42jER5Y0SS4uy9uplYe7LzVyu8RCjZBQ29S4qngIIrP%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FARADGgw1MTA4NjE5NzI3MDciDKBs9oYjrrWwXS8VBSryAQbYFWB2e1sVYrKkSZIsGRFIbxavVhV9sIxvswCUHiVkg8WuORPg8w50pvkkQ9d6yGiVGqWrG9y7xAhXE%2BZ6V2usVcQYAVx2Qrl6VS0YSt%2FifdK%2BCw1r%2Bxr4yOBM3KRyr%2FtCLm7wwbmp7GMsmm2WSjM2Re%2F6jlkv33BPgJYaJFiMBntEZY%2Fi7ktKWIBDiG8zZPLraj7NP%2Bs4Gum7Su2iEns%2B6EG6KfTdmlwYQ66R%2FntC8VCKd%2F8LUrjPlmLVQYkQ4bsfeXYofosivvFut4RAs6ZjoX98Uly0bDJhZQRGg5vsyoQv1S0yUecnQte7C91atTSwMMSKk6QGOt8BdtKSv6b1qjVWdjnregaGY7oPpBaz1t76kg986dFqjxL0wD7mF%2B4U%2FQRf7WPCW4jD9ixiJTF1hhsFALoQT0f5EbELsAEfxND0oLjHp7j21cBrjTm976Rp5GbTAjkOD8MmZSXJu1S%2FlHP565wRnUZOUemD8QOZnGWthzGHC75KzCTvG6eQC9PC%2Fd44XC8FaW59o61BffYu0RWuAV4mwQQs6EKkttiO1UGtdOE5btz6iKpqxQeOQiZ3vUxGYcA5U6CZPSD8NMdpxZKxdQLLk32rTLmTBHDnSmpiHBaulElaWA%3D%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20230610T191200Z&X-Amz-SignedHeaders=host&X-Amz-Expires=18000&X-Amz-Credential=ASIAXN4N7CTRWQP7MROI%2F20230610%2Fca-central-1%2Fs3%2Faws4_request&X-Amz-Signature=c78df78f76425fe6c01112812e83867810458caf30fc436417a28b809ed5dd8c')
```
| https://github.com/ClickHouse/ClickHouse/issues/50846 | https://github.com/ClickHouse/ClickHouse/pull/52310 | 22e5da914c739f425f70b9247c3c22f96e49de5b | f5e50b8b2894e6365b6acf57d4aa7355dd9493c9 | "2023-06-10T21:07:35Z" | c++ | "2023-07-28T09:39:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,836 | ["src/Common/AsynchronousMetrics.cpp"] | Normalized CPU metrics should use the max number of CPU cores from CGroups when they are set | **Use case**
Kubernetes
| https://github.com/ClickHouse/ClickHouse/issues/50836 | https://github.com/ClickHouse/ClickHouse/pull/50835 | 366bf3b3ec5b46f87832029daf4f607b90b72cc3 | 5cdf893f3ab1b0b80c974757ac5b054938285267 | "2023-06-10T16:05:21Z" | c++ | "2023-06-12T20:15:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,831 | ["programs/server/dashboard.html"] | Fill gaps on the embedded dashboard | Add
```
WITH FILL STEP {rounding:UInt32}
```
to the queries. | https://github.com/ClickHouse/ClickHouse/issues/50831 | https://github.com/ClickHouse/ClickHouse/pull/50832 | 0b85af3957f1df0842ed12622ca0c6de4b786c70 | 81b38db5295482d67a8f3a5bd53be066707e21c0 | "2023-06-10T12:28:52Z" | c++ | "2023-06-12T14:33:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,707 | ["src/Client/ClientBase.cpp", "src/Client/Suggest.cpp", "src/Parsers/ASTQueryWithOutput.cpp", "src/Parsers/ASTQueryWithOutput.h", "src/Parsers/ParserQueryWithOutput.cpp", "tests/queries/0_stateless/02050_clickhouse_client_local_exception.sh", "tests/queries/0_stateless/02346_into_outfile_and_stdout.sh"] | Bad usability of exception messages when a file already exists in INTO OUTFILE | **Describe the issue**
```
milovidov-desktop :) SELECT * FROM numbers(10) INTO OUTFILE 'numbers.tsv'
SELECT *
FROM numbers(10)
INTO OUTFILE 'numbers.tsv'
Query id: 4512f761-2b36-4a25-9ccd-22982a17eb64
Ok.
Exception on client:
Code: 76. DB::Exception: Code: 76. DB::ErrnoException: Cannot open file numbers.tsv, errno: 17, strerror: File exists. (CANNOT_OPEN_FILE) (version 23.5.1.1). (CANNOT_OPEN_FILE)
```
1. It prints Ok, but then prints the exception message.
2. It prints `Code: 76.` and `Exception` twice.
3. It prints `CANNOT_OPEN_FILE` twice.
4. It does not suggest the usage of the `APPEND` syntax. | https://github.com/ClickHouse/ClickHouse/issues/50707 | https://github.com/ClickHouse/ClickHouse/pull/50950 | 71678f64b184b3325820ab1f3882cbc5755cb196 | 4b02d83999808ab06320ba82a4530bb958e6b51f | "2023-06-08T12:13:14Z" | c++ | "2023-06-25T16:40:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,669 | ["src/Interpreters/ActionsDAG.cpp", "tests/queries/0_stateless/02812_bug_with_unused_join_columns.reference", "tests/queries/0_stateless/02812_bug_with_unused_join_columns.sql"] | LOGICAL_ERROR when use DROP unused join columns and filter push down at the same time in new analyzer | reproduce it on master by a very easy SQL
```
SELECT
concat(func.name, comb.name) AS x
FROM
system.functions AS func
JOIN system.aggregate_function_combinators AS comb using name
WHERE
is_aggregate settings allow_experimental_analyzer=1;
SELECT concat(func.name, comb.name) AS x
FROM system.functions AS func
INNER JOIN system.aggregate_function_combinators AS comb USING (name)
WHERE is_aggregate
Query id: bdb88424-dbb8-45b6-b278-f074af7e69b3
0 rows in set. Elapsed: 0.005 sec.
Received exception from server (version 23.5.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid number of columns in chunk pushed to OutputPort. Expected 3, found 13
Header: func.is_aggregate_2 UInt8 Const(size = 0, UInt8(size = 1)), func.name_0 String String(size = 0), comb.name_1 String String(size = 0)
Chunk: Const(size = 0, UInt8(size = 1)) String(size = 0) UInt8(size = 0) String(size = 0) String(size = 0) Int8(size = 0) String(size = 0) String(size = 0) String(size = 0) String(size = 0) String(size = 0) String(size = 0) String(size = 0)
. (LOGICAL_ERROR)
```
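Since the failure is specific to the new analyzer, a hedged workaround sketch is to run the same query with it disabled:

```sql
-- workaround sketch: the query is reported to fail only with the new analyzer
SELECT concat(func.name, comb.name) AS x
FROM system.functions AS func
INNER JOIN system.aggregate_function_combinators AS comb USING (name)
WHERE is_aggregate
SETTINGS allow_experimental_analyzer = 0;
```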
https://github.com/ClickHouse/ClickHouse/pull/50430#issuecomment-1576860893 can help explain why it fails | https://github.com/ClickHouse/ClickHouse/issues/50669 | https://github.com/ClickHouse/ClickHouse/pull/51947 | 389aaf9db1e9beeda709d05e876bec936686b5a1 | 25aa6fcff9d89373a8b4991c4b6141e2acb9cbf2 | "2023-06-07T13:58:09Z" | c++ | "2023-07-07T23:42:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,626 | ["tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh"] | 01676_clickhouse_client_autocomplete is flaky | ```sql
SELECT
check_name,
test_name,
toStartOfDay(check_start_time) as t,
count() as runs,
100 * (countIf(test_status != 'OK' AND test_status != 'SKIPPED') AS f) / runs as failure_percentage
FROM checks
WHERE
test_name LIKE '01676_clickhouse_client_autocomplete%'
AND pull_request_number = 0
AND check_start_time > today() - interval 30 day
GROUP BY check_name, test_name, t
HAVING f > 0
ORDER by check_name, test_name, t
```
# | check_name | test_name | t | runs | failure_percentage
-- | -- | -- | -- | -- | --
1 | Stateless tests (release, analyzer) | 01676_clickhouse_client_autocomplete | 2023-06-02 00:00:00 | 34 | 29.41176470588235
2 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-05-07 00:00:00 | 26 | 7.6923076923076925
3 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-05-09 00:00:00 | 17 | 17.647058823529413
4 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-05-12 00:00:00 | 31 | 3.225806451612903
5 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-05-15 00:00:00 | 13 | 7.6923076923076925
6 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-05-17 00:00:00 | 8 | 12.5
7 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-05-21 00:00:00 | 5 | 20
8 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-05-22 00:00:00 | 16 | 6.25
9 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-05-23 00:00:00 | 22 | 4.545454545454546
10 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-05-24 00:00:00 | 18 | 5.555555555555555
11 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-06-01 00:00:00 | 23 | 4.3478260869565215
12 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-06-02 00:00:00 | 34 | 11.764705882352942
13 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-06-04 00:00:00 | 8 | 12.5
14 | Stateless tests (ubsan) [2/2] | 01676_clickhouse_client_autocomplete | 2023-06-06 00:00:00 | 6 | 16.666666666666668
Sample run: https://s3.amazonaws.com/clickhouse-test-reports/50594/29e886fdf4e90a48b303e49f15c276e04f3cc7d2/stateless_tests__ubsan__[2_2].html | https://github.com/ClickHouse/ClickHouse/issues/50626 | https://github.com/ClickHouse/ClickHouse/pull/50636 | 707abc85f46791fc816c52bb8feacad0dfd3ea96 | 1b1e3fbdd431a1b4eaf46f0cb15844ff8ded5143 | "2023-06-06T12:57:27Z" | c++ | "2023-06-06T23:36:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,619 | ["src/Server/GRPCServer.cpp", "src/Server/TCPHandler.cpp"] | 00534_functions_bad_arguments tests are flaky | It seems it's mostly related with MSAN (too slow? too many things at once?)
```
SELECT
check_name,
test_name,
toStartOfDay(check_start_time) as t,
count() as runs,
100 * (countIf(test_status != 'OK' AND test_status != 'SKIPPED') AS f) / runs as failure_percentage
FROM checks
WHERE
test_name LIKE '00534_functions_bad_arguments%'
AND pull_request_number = 0
AND check_start_time > today() - interval 30 day
GROUP BY check_name, test_name, t
HAVING f > 0
ORDER by check_name, test_name, t
```
# | check_name | test_name | t | runs | failure_percentage
-- | -- | -- | -- | -- | --
1 | Stateless tests (msan) [3/6] | 00534_functions_bad_arguments12 | 2023-06-02 00:00:00 | 32 | 3.125
2 | Stateless tests (msan) [3/6] | 00534_functions_bad_arguments12 | 2023-06-05 00:00:00 | 33 | 24.242424242424242
3 | Stateless tests (msan) [3/6] | 00534_functions_bad_arguments4_long | 2023-06-05 00:00:00 | 33 | 15.151515151515152
4 | Stateless tests (msan) [4/6] | 00534_functions_bad_arguments10 | 2023-06-02 00:00:00 | 30 | 20
5 | Stateless tests (msan) [4/6] | 00534_functions_bad_arguments10 | 2023-06-03 00:00:00 | 9 | 11.11111111111111
6 | Stateless tests (msan) [4/6] | 00534_functions_bad_arguments5 | 2023-06-02 00:00:00 | 30 | 20
7 | Stateless tests (msan) [4/6] | 00534_functions_bad_arguments5 | 2023-06-03 00:00:00 | 9 | 11.11111111111111
8 | Stateless tests (msan) [4/6] | 00534_functions_bad_arguments6 | 2023-06-02 00:00:00 | 30 | 20
9 | Stateless tests (msan) [4/6] | 00534_functions_bad_arguments6 | 2023-06-03 00:00:00 | 9 | 11.11111111111111
10 | Stateless tests (msan) [4/6] | 00534_functions_bad_arguments9 | 2023-06-02 00:00:00 | 30 | 20
11 | Stateless tests (msan) [4/6] | 00534_functions_bad_arguments9 | 2023-06-03 00:00:00 | 9 | 11.11111111111111
Example run:
https://s3.amazonaws.com/clickhouse-test-reports/50594/29e886fdf4e90a48b303e49f15c276e04f3cc7d2/stateless_tests__msan__[3_6].html
| https://github.com/ClickHouse/ClickHouse/issues/50619 | https://github.com/ClickHouse/ClickHouse/pull/51310 | eaf95306ed45a7875c37a36d125350c7f3e10462 | 75ef844f99a3af081978a7d905dcf2c214508237 | "2023-06-06T10:54:36Z" | c++ | "2023-06-23T11:51:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,582 | ["src/Processors/Transforms/FinishSortingTransform.cpp", "tests/queries/0_stateless/02815_fix_not_found_constants_col_in_block.reference", "tests/queries/0_stateless/02815_fix_not_found_constants_col_in_block.sql"] | Unexpected error: Not found column NULL in block | **Describe what's wrong**
The two SELECT statements should output the same results, but one of them throws an exception.
**Does it reproduce on recent release?**
It can be reproduced in the latest version.
**How to reproduce**
Version: 23.5.1 (commit eb5985e5fc0e83c94aa1af134f2718e9fe87979c)
Easy reproduce in ClickHouse fiddle: https://fiddle.clickhouse.com/11823026-1268-4336-b744-7729f9025eeb
_Set up database_
```sql
create table t0 (vkey UInt32, c0 Float32, primary key(c0)) engine = AggregatingMergeTree;
insert into t0 values (19000, 1);
```
_SELECT statement 1_
```sql
select
null as c_2_0,
ref_2.c0 as c_2_1,
ref_2.vkey as c_2_2
from
t0 as ref_2
order by c_2_0 asc, c_2_1 asc, c_2_2 asc;
```
Then, I remove `c_2_2 asc` in the ORDER BY. The number of outputted rows should not be changed.
_SELECT statement 2_
```sql
select
null as c_2_0,
ref_2.c0 as c_2_1,
ref_2.vkey as c_2_2
from
t0 as ref_2
order by c_2_0 asc, c_2_1 asc;
```
**Expected behavior**
The two SELECT statements output the same results, or both of them throw the same exceptions.
**Actual behavior**
One outputs a row, and another one throws an exception.
SELECT statement 1 throws an exception:
```
Received exception from server (version 23.4.2):
Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Not found column NULL in block. There are only columns: vkey, c0. (NOT_FOUND_COLUMN_IN_BLOCK)
(query: select
null as c_2_0,
ref_2.c0 as c_2_1,
ref_2.vkey as c_2_2
from
t0 as ref_2
order by c_2_0 asc, c_2_1 asc, c_2_2 asc;)
```
SELECT statement 2 outputs:
```
+-------+-------+-------+
| c_2_0 | c_2_1 | c_2_2 |
+-------+-------+-------+
| NULL | 1 | 19000 |
+-------+-------+-------+
```
**Additional context**
The earliest reproducible version is 22 in fiddle: https://fiddle.clickhouse.com/a7482d31-b9f8-44bc-88e7-30368913c35d
Before version 22, both SELECT statements output one row (e.g., 21.12.4.1 https://fiddle.clickhouse.com/9aedf68b-3928-4b23-ae48-e61d21ba75bf)
| https://github.com/ClickHouse/ClickHouse/issues/50582 | https://github.com/ClickHouse/ClickHouse/pull/52259 | 9280f4a9fda5b926461e63e9c9e66c26890071a0 | d0cb6596e1ca101fc8aa20ac1d0f43cab35e4b2d | "2023-06-05T10:33:48Z" | c++ | "2023-07-21T22:05:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,578 | ["src/Storages/StorageLog.cpp", "src/Storages/StorageStripeLog.cpp", "tests/queries/0_stateless/02771_log_faminy_truncate_count.reference", "tests/queries/0_stateless/02771_log_faminy_truncate_count.sql"] | LOG engine count() after TRUNCATE is not empty | > A clear and concise description of what works not as it is supposed to.
When I run TRUNCATE on a Log engine table, it deletes the records, but count() still returns a non-zero value
> A link to reproducer
https://fiddle.clickhouse.com/9ee5ac13-56b9-4b78-9f61-66f0e8e49de6
**Does it reproduce on recent release?**
Yes
**How to reproduce**
* Which ClickHouse server version to use
23.4.2.11
* Which interface to use, if matters
fiddle
* `CREATE TABLE` statements for all tables involved
```
CREATE TABLE default.test_log
(
`crypto_name` String,
`trade_date` Date
)
ENGINE = Log
SETTINGS index_granularity = 8192;
```
* Sample data for all these tables
`INSERT INTO default.test_log (crypto_name, trade_date) VALUES ('abc', '2021-01-01'), ('def', '2022-02-02');`
* Queries to run that lead to unexpected result
```
TRUNCATE TABLE default.test_log;
SELECT count(*) FROM default.test_log;
```
2
**Expected behavior**
count is 0
> A clear and concise description of what you expected to happen.
as `select` from the table after `truncate` returns nothing, `count` should return 0
| https://github.com/ClickHouse/ClickHouse/issues/50578 | https://github.com/ClickHouse/ClickHouse/pull/50585 | 8f9c74debb002b74180ad534e828714b49bba44a | ad74189bc2ed4039b0cf129928141e13f6db435b | "2023-06-05T08:39:10Z" | c++ | "2023-06-09T11:32:45Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,570 | ["src/Functions/in.cpp", "tests/queries/0_stateless/02867_null_lc_in_bug.reference", "tests/queries/0_stateless/02867_null_lc_in_bug.sql"] | NULL::LowCardinality(Nullable(T)) NOT IN bug | **Describe what's wrong**
ClickHouse will return NULL value for operator NOT IN with 0-set
**Does it reproduce on recent release?**
Yes, 23.3
**How to reproduce**
```
SELECT lc
FROM
(
SELECT CAST(NULL, 'LowCardinality(Nullable(String))') AS lc
)
WHERE lc NOT IN (
SELECT *
FROM
(
SELECT materialize(CAST('', 'LowCardinality(Nullable(String))') AS lc)
)
WHERE 0
)
┌─lc───┐
│ ᴺᵁᴸᴸ │
└──────┘
SELECT lc
FROM
(
SELECT materialize(CAST(NULL, 'Nullable(String)')) AS lc
)
WHERE lc NOT IN (
SELECT *
FROM
(
SELECT materialize(CAST('', 'LowCardinality(Nullable(String))') AS lc)
)
WHERE 0
)
Ok.
0 rows in set. Elapsed: 0.009 sec.
```
**Expected behavior**
Query will return 0 rows always
**Additional context**
The version in which the bug was introduced is 21.6
https://fiddle.clickhouse.com/c8e03c2d-6879-4ae5-b565-a485d1c56f5d | https://github.com/ClickHouse/ClickHouse/issues/50570 | https://github.com/ClickHouse/ClickHouse/pull/53706 | ef3a6ee4fcfa8f76296bca04608e8924cc0ec613 | f522c247741cb74046863fa266533526fabc4a73 | "2023-06-05T04:18:20Z" | c++ | "2023-08-27T19:43:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,537 | ["src/Storages/StorageGenerateRandom.cpp", "tests/queries/0_stateless/00416_pocopatch_progress_in_http_headers.sh", "tests/queries/0_stateless/02539_generate_random_map.reference", "tests/queries/0_stateless/02586_generate_random_structure.reference"] | A query consumes a lot of RAM sometimes. | **How to reproduce**
```
SELECT timestamp_diff(( SELECT * FROM generateRandom() ) AS vx, ( SELECT * FROM hudi(['\0']) ) , (CAST((NULL) AS DateTime)));
```
Run several times.
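The repro calls `generateRandom()` with no arguments, which makes the server generate a random table structure; a hedged mitigation sketch, assuming the memory blow-up comes from that random-structure path, is to pin an explicit structure:

```sql
-- mitigation sketch: an explicit structure avoids random-structure generation
SELECT * FROM generateRandom('x UInt64') LIMIT 1;
```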
| https://github.com/ClickHouse/ClickHouse/issues/50537 | https://github.com/ClickHouse/ClickHouse/pull/50538 | 50720760825e76a964c95aa4ba377c4ab147d0f5 | 18817517ed6f8849e3d979e10fbb273e0edf0eaa | "2023-06-04T02:01:14Z" | c++ | "2023-06-04T23:50:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,530 | ["src/Parsers/ExpressionElementParsers.cpp", "tests/queries/0_stateless/02782_inconsistent_formatting_and_constant_folding.reference", "tests/queries/0_stateless/02782_inconsistent_formatting_and_constant_folding.sql"] | Server crashes triggered at DB::FunctionMathUnary | **Describe what's wrong**
The SELECT statement makes the server crash.
**Does it reproduce on recent release?**
It can be reproduced in the latest version.
**How to reproduce**
Version: 23.5.1 (commit eb5985e5fc0e83c94aa1af134f2718e9fe87979c)
Easy reproduce in ClickHouse fiddle: https://fiddle.clickhouse.com/528f832d-a684-4553-88ac-5ab0c660c4b3
_Set up database_
```sql
create table t4 (c26 String) engine = Log;
create view t7 as select max(ref_3.c26) as c_2_c46_1 from t4 as ref_3;
```
_SELECT statement_
```sql
select 1
from
(select
subq_0.c_7_c4585_14 as c_4_c4593_5
from
(select
avg(0) as c_7_c4572_1,
max(-0) as c_7_c4585_14
from
t7 as ref_0
group by ref_0.c_2_c46_1) as subq_0
) as subq_1
where subq_1.c_4_c4593_5 <= multiIf(true, 1, exp10(subq_1.c_4_c4593_5) <= 1, 1, 1);
```
**Expected behavior**
No crash.
**Actual behavior**
The server crashes.
The log:
```
[b6ac46c27d64] 2023.06.03 14:56:32.750565 [ 237202 ] <Fatal> BaseDaemon: ########################################
[b6ac46c27d64] 2023.06.03 14:56:32.750713 [ 237202 ] <Fatal> BaseDaemon: (version 23.5.1.1, build id: EF69E3BCB567A95743EEA735927D4FA6876C5FC2) (from thread 236859) (query_id: 06128d92-cc10-467a-ada3-a531eb45aca2) (query: select 1
from
(select
subq_0.c_7_c4585_14 as c_4_c4593_5
from
(select
avg(0) as c_7_c4572_1,
max(-0) as c_7_c4585_14
from
t7 as ref_0
group by ref_0.c_2_c46_1) as subq_0
) as subq_1
where subq_1.c_4_c4593_5 <= multiIf(true, 1, exp10(subq_1.c_4_c4593_5) <= 1, 1, 1);) Received signal Segmentation fault (11)
[b6ac46c27d64] 2023.06.03 14:56:32.750776 [ 237202 ] <Fatal> BaseDaemon: Address: 0x10. Access: read. Address not mapped to object.
[b6ac46c27d64] 2023.06.03 14:56:32.750828 [ 237202 ] <Fatal> BaseDaemon: Stack trace: 0x000000000e432754 0x000000000cb6924a 0x000000000cb68f0e 0x0000000015ff926f 0x0000000015ff9b0f 0x0000000015ffab79 0x00000000166d724b 0x00000000182f7a45 0x000000001843ead1 0x00000000170f62c0 0x00000000170eeaa2 0x00000000170ecfd4 0x0000000017171d38 0x0000000017172771 0x00000000174349e7 0x0000000017431cef 0x00000000180c8fa4 0x00000000180d92d9 0x000000001a695f07 0x000000001a6963ed 0x000000001a7fd0e7 0x000000001a7fad02 0x00007fbdfd11a609 0x00007fbdfd03f133
[b6ac46c27d64] 2023.06.03 14:56:34.088144 [ 237202 ] <Fatal> BaseDaemon: 3. DB::FunctionMathUnary<DB::UnaryFunctionVectorized<DB::(anonymous namespace)::Exp10Name, &preciseExp10(double)>>::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x000000000e432754 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:35.398375 [ 237202 ] <Fatal> BaseDaemon: 4. DB::IFunction::executeImplDryRun(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x000000000cb6924a in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:36.702568 [ 237202 ] <Fatal> BaseDaemon: 5. DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x000000000cb68f0e in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:36.723286 [ 237202 ] <Fatal> BaseDaemon: 6. ./build/./src/Functions/IFunction.cpp:0: DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000015ff926f in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:36.744257 [ 237202 ] <Fatal> BaseDaemon: 7.1. inlined from ./build/./contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:115: intrusive_ptr
[b6ac46c27d64] 2023.06.03 14:56:36.744343 [ 237202 ] <Fatal> BaseDaemon: 7.2. inlined from ./build/./contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:122: boost::intrusive_ptr<DB::IColumn const>::operator=(boost::intrusive_ptr<DB::IColumn const>&&)
[b6ac46c27d64] 2023.06.03 14:56:36.744380 [ 237202 ] <Fatal> BaseDaemon: 7.3. inlined from ./build/./src/Common/COW.h:136: COW<DB::IColumn>::immutable_ptr<DB::IColumn>::operator=(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&&)
[b6ac46c27d64] 2023.06.03 14:56:36.744415 [ 237202 ] <Fatal> BaseDaemon: 7. ./build/./src/Functions/IFunction.cpp:302: DB::IExecutableFunction::executeWithoutSparseColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000015ff9b0f in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:36.766996 [ 237202 ] <Fatal> BaseDaemon: 8. ./build/./src/Functions/IFunction.cpp:0: DB::IExecutableFunction::execute(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000015ffab79 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:37.000715 [ 237202 ] <Fatal> BaseDaemon: 9.1. inlined from ./build/./src/Interpreters/ActionsDAG.cpp:0: DB::executeActionForHeader(DB::ActionsDAG::Node const*, std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>>)
[b6ac46c27d64] 2023.06.03 14:56:37.000827 [ 237202 ] <Fatal> BaseDaemon: 9. ./build/./src/Interpreters/ActionsDAG.cpp:654: DB::ActionsDAG::updateHeader(DB::Block) const @ 0x00000000166d724b in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:37.016996 [ 237202 ] <Fatal> BaseDaemon: 10. ./build/./src/Processors/Transforms/FilterTransform.cpp:44: DB::FilterTransform::transformHeader(DB::Block, DB::ActionsDAG const*, String const&, bool) @ 0x00000000182f7a45 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:37.041544 [ 237202 ] <Fatal> BaseDaemon: 11. ./build/./src/Processors/QueryPlan/FilterStep.cpp:0: DB::FilterStep::FilterStep(DB::DataStream const&, std::shared_ptr<DB::ActionsDAG> const&, String, bool) @ 0x000000001843ead1 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:37.223589 [ 237202 ] <Fatal> BaseDaemon: 12.1. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if<DB::FilterStep>::__unique_single std::make_unique[abi:v15000]<DB::FilterStep, DB::DataStream const&, std::shared_ptr<DB::ActionsDAG> const&, String, bool&>(DB::DataStream const&, std::shared_ptr<DB::ActionsDAG> const&, String&&, bool&)
[b6ac46c27d64] 2023.06.03 14:56:37.223651 [ 237202 ] <Fatal> BaseDaemon: 12. ./build/./src/Interpreters/InterpreterSelectQuery.cpp:2543: DB::InterpreterSelectQuery::executeWhere(DB::QueryPlan&, std::shared_ptr<DB::ActionsDAG> const&, bool) @ 0x00000000170f62c0 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:37.401327 [ 237202 ] <Fatal> BaseDaemon: 13. ./build/./src/Interpreters/InterpreterSelectQuery.cpp:1706: DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::optional<DB::Pipe>) @ 0x00000000170eeaa2 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:37.572939 [ 237202 ] <Fatal> BaseDaemon: 14.1. inlined from ./build/./contrib/llvm-project/libcxx/include/optional:260: ~__optional_destruct_base
[b6ac46c27d64] 2023.06.03 14:56:37.573016 [ 237202 ] <Fatal> BaseDaemon: 14. ./build/./src/Interpreters/InterpreterSelectQuery.cpp:886: DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x00000000170ecfd4 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:37.628330 [ 237202 ] <Fatal> BaseDaemon: 15. ./build/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:352: DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x0000000017171d38 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:37.681823 [ 237202 ] <Fatal> BaseDaemon: 16.1. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:603: shared_ptr<DB::Context, void>
[b6ac46c27d64] 2023.06.03 14:56:37.681899 [ 237202 ] <Fatal> BaseDaemon: 16. ./build/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:380: DB::InterpreterSelectWithUnionQuery::execute() @ 0x0000000017172771 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:37.776182 [ 237202 ] <Fatal> BaseDaemon: 17. ./build/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x00000000174349e7 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:37.878183 [ 237202 ] <Fatal> BaseDaemon: 18. ./build/./src/Interpreters/executeQuery.cpp:1180: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x0000000017431cef in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:37.943916 [ 237202 ] <Fatal> BaseDaemon: 19. ./build/./src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x00000000180c8fa4 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:38.062867 [ 237202 ] <Fatal> BaseDaemon: 20. ./build/./src/Server/TCPHandler.cpp:2045: DB::TCPHandler::run() @ 0x00000000180d92d9 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:38.066402 [ 237202 ] <Fatal> BaseDaemon: 21. ./build/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001a695f07 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:38.071875 [ 237202 ] <Fatal> BaseDaemon: 22.1. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: std::default_delete<Poco::Net::TCPServerConnection>::operator()[abi:v15000](Poco::Net::TCPServerConnection*) const
[b6ac46c27d64] 2023.06.03 14:56:38.071948 [ 237202 ] <Fatal> BaseDaemon: 22.2. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:305: std::unique_ptr<Poco::Net::TCPServerConnection, std::default_delete<Poco::Net::TCPServerConnection>>::reset[abi:v15000](Poco::Net::TCPServerConnection*)
[b6ac46c27d64] 2023.06.03 14:56:38.071992 [ 237202 ] <Fatal> BaseDaemon: 22.3. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:259: ~unique_ptr
[b6ac46c27d64] 2023.06.03 14:56:38.072026 [ 237202 ] <Fatal> BaseDaemon: 22. ./build/./base/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x000000001a6963ed in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:38.078567 [ 237202 ] <Fatal> BaseDaemon: 23. ./build/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001a7fd0e7 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:38.084298 [ 237202 ] <Fatal> BaseDaemon: 24.1. inlined from ./build/./base/poco/Foundation/include/Poco/SharedPtr.h:139: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::assign(Poco::Runnable*)
[b6ac46c27d64] 2023.06.03 14:56:38.084368 [ 237202 ] <Fatal> BaseDaemon: 24.2. inlined from ./build/./base/poco/Foundation/include/Poco/SharedPtr.h:180: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::operator=(Poco::Runnable*)
[b6ac46c27d64] 2023.06.03 14:56:38.084400 [ 237202 ] <Fatal> BaseDaemon: 24. ./build/./base/poco/Foundation/src/Thread_POSIX.cpp:350: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001a7fad02 in /usr/bin/clickhouse
[b6ac46c27d64] 2023.06.03 14:56:38.084446 [ 237202 ] <Fatal> BaseDaemon: 25. ? @ 0x00007fbdfd11a609 in ?
[b6ac46c27d64] 2023.06.03 14:56:38.084494 [ 237202 ] <Fatal> BaseDaemon: 26. clone @ 0x00007fbdfd03f133 in ?
[b6ac46c27d64] 2023.06.03 14:56:38.084533 [ 237202 ] <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
Error on processing query: Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF) (version 23.5.1.1)
(query: select 1
from
(select
subq_0.c_7_c4585_14 as c_4_c4593_5
from
(select
avg(0) as c_7_c4572_1,
max(-0) as c_7_c4585_14
from
t7 as ref_0
group by ref_0.c_2_c46_1) as subq_0
) as subq_1
where subq_1.c_4_c4593_5 <= multiIf(true, 1, exp10(subq_1.c_4_c4593_5) <= 1, 1, 1);)
```
**Additional context**
The earliest reproducible version is 21 in fiddle: https://fiddle.clickhouse.com/789af690-f306-431c-ab68-9572a58bfaca
| https://github.com/ClickHouse/ClickHouse/issues/50530 | https://github.com/ClickHouse/ClickHouse/pull/50536 | 6241ea35145d1d88f218a17ea3454736037dcd6b | 091d6d02f7c0ecbb8708973d9199a839e7ea6d50 | "2023-06-03T15:05:30Z" | c++ | "2023-06-05T01:35:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,459 | ["programs/client/Client.cpp", "tests/queries/0_stateless/01317_no_password_in_command_line.reference", "tests/queries/0_stateless/01317_no_password_in_command_line.sh"] | Specifying --password "secret" --password in client asks user to enter password | ClickHouse client version 23.5.1.1
**Describe the unexpected behavior.**
Specifying `--password "secret" --password` asks the user to enter the password instead of failing in the console output.
**Repro**
`clickhouse-client --password "secret" --password`
**Expected behavior**
An error: `Bad arguments: option '--password' cannot be specified more than once`
**Error message and/or stacktrace**
No error message. The client just ignores the "secret" password, and uses implicit_value "" from the boost options configuration.
If the password is an empty string, then the client asks the user to enter the password (even if the user does not have a password). | https://github.com/ClickHouse/ClickHouse/issues/50459 | https://github.com/ClickHouse/ClickHouse/pull/50966 | c1faf42481b3ff9b2bb8702405760577d56a3f03 | 946e1a8c6f34209cb1c9fed46e7e6aed4420ae09 | "2023-06-02T05:59:42Z" | c++ | "2023-06-15T09:14:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,456 | ["src/Common/TaskStatsInfoGetter.cpp", "src/Common/ThreadStatus.h"] | Linux 6.1.22 on RISC-V failed to support TaskStats interface | > I've successfully built ClickHouse on today's master but cannot check if it will run. I give about a 30% chance for it.
https://drive.google.com/file/d/1LHwtaPBJlSQYPppPMfUbML99H9UJKy4r/view?usp=sharing
I've started the server of this version of clickhouse on risc-v. But when I execute the SQL, I got this error:
server:
<img width="712" alt="image" src="https://github.com/ClickHouse/ClickHouse/assets/37319050/f6397557-d067-4416-a0f8-990068d0fe38">
client:
<img width="709" alt="image" src="https://github.com/ClickHouse/ClickHouse/assets/37319050/1a0bfcbd-ba5e-4b13-9704-820ed0c95795">
Operating system
`Linux fedora-riscv 6.1.22 #1 SMP Tue May 9 00:07:19 EDT 2023 riscv64 GNU/Linux`
@alexey-milovidov @ernado
Any help is much appreciated!
_Originally posted by @Joeywzr in https://github.com/ClickHouse/ClickHouse/issues/40141#issuecomment-1571500633_
| https://github.com/ClickHouse/ClickHouse/issues/50456 | https://github.com/ClickHouse/ClickHouse/pull/50457 | f0bfd44e13cd09b12f0add52097fa0041ba31594 | c97f1735671a49293614162ae65f6df612009e40 | "2023-06-02T01:24:15Z" | c++ | "2023-06-02T14:50:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,381 | ["src/CMakeLists.txt"] | Linker failed with 23.4, error: undefined symbol: make_fcontext | 1. clang & cmake version
```
Ubuntu clang version 16.0.5 (++20230521064343+5729e63ac7b4-1~exp1~20230521184435.94)
cmake version 3.26.4
```
2. code branch
```
commit edcd8ed14cd6f09a3056a9dac8c1329eb28df9f5 (HEAD -> 23.4, origin/23.4, coding/23.4)
Merge: cb2a4617e54 0ab664b7fbd
Author: Nikita Mikhaylov <[email protected]>
Date: Fri May 26 15:58:16 2023 +0200
```
3. update submodule
```
git submodule sync
git submodule init
git submodule update -f
git submodule update --init -f
```
4. ninja failure output
```
FAILED: utils/config-processor/config-processor
: && /usr/bin/clang++-16 --target=x86_64-linux-gnu --sysroot=/code/ClickHouse/cmake/linux/../../contrib/sysroot/linux-x86_64/x86_64-linux-gnu/libc --gcc-toolchain=/code/ClickHouse/cmake/linux/../../contrib/sysroot/linux-x86_64 -std=c++20 -fdiagnostics-color=always -Xclang -fuse-ctor-homing -Wno-enum-constexpr-conversion -fsized-deallocation -gdwarf-aranges -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -fasynchronous-unwind-tables -ffile-prefix-map=/code/ClickHouse=. -falign-functions=32 -mbranches-within-32B-boundaries -fdiagnostics-absolute-paths -fstrict-vtable-pointers -Wall -Wextra -Wframe-larger-than=65536 -Weverything -Wpedantic -Wno-zero-length-array -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-c++20-compat -Wno-sign-conversion -Wno-implicit-int-conversion -Wno-implicit-int-float-conversion -Wno-ctad-maybe-unsupported -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-padded -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -Wno-thread-safety-negative -Wno-enum-constexpr-conversion -Wno-unsafe-buffer-usage -O2 -g -DNDEBUG -O3 -g -gdwarf-4 -fno-pie --gcc-toolchain=/code/ClickHouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --ld-path=/code/ClickHouse/build_23.4/ld.lld -rdynamic -Wl,--gdb-index -Wl,--build-id=sha1 -no-pie -Wl,-no-pie -Xlinker --no-undefined src/CMakeFiles/clickhouse_malloc.dir/Common/malloc.cpp.o utils/config-processor/CMakeFiles/config-processor.dir/config-processor.cpp.o -o utils/config-processor/config-processor src/libclickhouse_new_delete.a src/Common/Config/libclickhouse_common_config_no_zookeeper_log.a src/Common/ZooKeeper/libclickhouse_common_zookeeper_no_log.a src/libclickhouse_common_io.a contrib/jemalloc-cmake/lib_jemalloc.a contrib/gwpasan-cmake/lib_gwp_asan.a 
base/widechar_width/libwidechar_width.a contrib/dragonbox-cmake/lib_dragonbox_to_chars.a contrib/boost-cmake/lib_boost_program_options.a contrib/re2-cmake/libre2.a contrib/re2-cmake/libre2_st.a contrib/libcpuid-cmake/lib_cpuid.a contrib/abseil-cpp/absl/strings/libabsl_cord.a contrib/abseil-cpp/absl/strings/libabsl_cordz_info.a contrib/abseil-cpp/absl/strings/libabsl_cord_internal.a contrib/abseil-cpp/absl/strings/libabsl_cordz_functions.a contrib/abseil-cpp/absl/strings/libabsl_cordz_handle.a contrib/abseil-cpp/absl/hash/libabsl_hash.a contrib/abseil-cpp/absl/hash/libabsl_city.a contrib/abseil-cpp/absl/types/libabsl_bad_variant_access.a contrib/abseil-cpp/absl/hash/libabsl_low_level_hash.a contrib/abseil-cpp/absl/container/libabsl_raw_hash_set.a contrib/abseil-cpp/absl/types/libabsl_bad_optional_access.a contrib/abseil-cpp/absl/container/libabsl_hashtablez_sampler.a contrib/abseil-cpp/absl/profiling/libabsl_exponential_biased.a contrib/abseil-cpp/absl/synchronization/libabsl_synchronization.a contrib/abseil-cpp/absl/debugging/libabsl_stacktrace.a contrib/abseil-cpp/absl/synchronization/libabsl_graphcycles_internal.a contrib/abseil-cpp/absl/debugging/libabsl_symbolize.a contrib/abseil-cpp/absl/base/libabsl_malloc_internal.a contrib/abseil-cpp/absl/debugging/libabsl_debugging_internal.a contrib/abseil-cpp/absl/debugging/libabsl_demangle_internal.a contrib/abseil-cpp/absl/time/libabsl_time.a contrib/abseil-cpp/absl/strings/libabsl_strings.a contrib/abseil-cpp/absl/base/libabsl_throw_delegate.a contrib/abseil-cpp/absl/strings/libabsl_strings_internal.a contrib/abseil-cpp/absl/base/libabsl_base.a contrib/abseil-cpp/absl/base/libabsl_spinlock_wait.a -lrt contrib/abseil-cpp/absl/base/libabsl_raw_logging_internal.a contrib/abseil-cpp/absl/base/libabsl_log_severity.a contrib/abseil-cpp/absl/numeric/libabsl_int128.a contrib/abseil-cpp/absl/time/libabsl_civil_time.a contrib/abseil-cpp/absl/time/libabsl_time_zone.a contrib/c-ares-cmake/lib_c-ares.a contrib/aws-cmake/lib_aws.a 
contrib/brotli-cmake/lib_brotli.a contrib/snappy-cmake/lib_snappy.a contrib/liburing-cmake/lib_liburing.a contrib/minizip-ng-cmake/lib_minizip.a contrib/zstd-cmake/lib_zstd.a contrib/xz-cmake/lib_liblzma.a contrib/bzip2-cmake/lib_bzip2.a base/base/libcommon.a contrib/boost-cmake/lib_boost_system.a contrib/cityhash102/lib_cityhash.a base/poco/NetSSL_OpenSSL/lib_poco_net_ssl.a base/poco/Net/lib_poco_net.a base/poco/Crypto/lib_poco_crypto.a contrib/boringssl-cmake/lib_ssl.a contrib/boringssl-cmake/lib_crypto.a -lpthread base/poco/Util/lib_poco_util.a base/poco/JSON/lib_poco_json.a base/poco/JSON/lib_poco_json_pdjson.a contrib/replxx-cmake/lib_replxx.a contrib/cctz-cmake/lib_cctz.a -Wl,--whole-archive /code/ClickHouse/build_23.4/contrib/cctz-cmake/libtzdata.a -Wl,--no-whole-archive contrib/fmtlib-cmake/lib_fmt.a base/poco/XML/lib_poco_xml.a base/poco/Foundation/lib_poco_foundation.a contrib/double-conversion-cmake/lib_double-conversion.a contrib/zlib-ng-cmake/lib_zlib.a contrib/lz4-cmake/lib_lz4.a base/poco/Foundation/lib_poco_foundation_pcre.a base/poco/XML/lib_poco_xml_expat.a src/Common/StringUtils/libstring_utils.a contrib/yaml-cpp-cmake/lib_yaml_cpp.a base/glibc-compatibility/libglibc-compatibility.a base/glibc-compatibility/memcpy/libmemcpy.a base/glibc-compatibility/libglibc-compatibility.a base/glibc-compatibility/memcpy/libmemcpy.a -Wl,--start-group contrib/libcxx-cmake/libcxx.a contrib/libcxxabi-cmake/libcxxabi.a contrib/libunwind-cmake/libunwind.a -Wl,--end-group -nodefaultlibs /usr/lib/llvm-16/lib/clang/16/lib/linux/libclang_rt.builtins-x86_64.a -lc -lm -lrt -lpthread -ldl && :
ld.lld-16: error: undefined symbol: make_fcontext
>>> referenced by fiber_fcontext.hpp:168 (./contrib/boost/boost/context/fiber_fcontext.hpp:168)
>>> AsyncTaskExecutor.cpp.o:(DB::AsyncTaskExecutor::AsyncTaskExecutor(std::__1::unique_ptr<DB::AsyncTask, std::__1::default_delete<DB::AsyncTask>>)) in archive src/libclickhouse_common_io.a
>>> referenced by fiber_fcontext.hpp:168 (./contrib/boost/boost/context/fiber_fcontext.hpp:168)
>>> AsyncTaskExecutor.cpp.o:(DB::AsyncTaskExecutor::createFiber()) in archive src/libclickhouse_common_io.a
>>> referenced by fiber_fcontext.hpp:168 (./contrib/boost/boost/context/fiber_fcontext.hpp:168)
>>> AsyncTaskExecutor.cpp.o:(DB::AsyncTaskExecutor::restart()) in archive src/libclickhouse_common_io.a
ld.lld-16: error: undefined symbol: jump_fcontext
>>> referenced by fiber_fcontext.hpp:171 (./contrib/boost/boost/context/fiber_fcontext.hpp:171)
>>> AsyncTaskExecutor.cpp.o:(DB::AsyncTaskExecutor::AsyncTaskExecutor(std::__1::unique_ptr<DB::AsyncTask, std::__1::default_delete<DB::AsyncTask>>)) in archive src/libclickhouse_common_io.a
>>> referenced by fiber_fcontext.hpp:171 (./contrib/boost/boost/context/fiber_fcontext.hpp:171)
>>> AsyncTaskExecutor.cpp.o:(DB::AsyncTaskExecutor::createFiber()) in archive src/libclickhouse_common_io.a
>>> referenced by fiber_fcontext.hpp:279 (./contrib/boost/boost/context/fiber_fcontext.hpp:279)
>>> AsyncTaskExecutor.cpp.o:(DB::AsyncTaskExecutor::resume()) in archive src/libclickhouse_common_io.a
>>> referenced 5 more times
ld.lld-16: error: undefined symbol: ontop_fcontext
>>> referenced by fiber_fcontext.hpp:251 (./contrib/boost/boost/context/fiber_fcontext.hpp:251)
>>> AsyncTaskExecutor.cpp.o:(DB::AsyncTaskExecutor::AsyncTaskExecutor(std::__1::unique_ptr<DB::AsyncTask, std::__1::default_delete<DB::AsyncTask>>)) in archive src/libclickhouse_common_io.a
>>> referenced by fiber_fcontext.hpp:251 (./contrib/boost/boost/context/fiber_fcontext.hpp:251)
>>> AsyncTaskExecutor.cpp.o:(DB::AsyncTaskExecutor::createFiber()) in archive src/libclickhouse_common_io.a
>>> referenced by fiber_fcontext.hpp:251 (./contrib/boost/boost/context/fiber_fcontext.hpp:251)
>>> AsyncTaskExecutor.cpp.o:(DB::AsyncTaskExecutor::resume()) in archive src/libclickhouse_common_io.a
>>> referenced 11 more times
clang: error: linker command failed with exit code 1 (use -v to see invocation)
[3246/3284] Building CXX object src/AggregateFunctions/CMakeFiles/clickhouse_aggregate_functions.dir/AggregateFunctionSumMap.cpp.o
ninja: build stopped: subcommand failed.
[1]+ Done clear
``` | https://github.com/ClickHouse/ClickHouse/issues/50381 | https://github.com/ClickHouse/ClickHouse/pull/50385 | aa19b2fc407e2b0452a8fd23ca623febcec43d8c | 28862129a56dc78be77eff000915924ba41e81fa | "2023-05-31T08:16:23Z" | c++ | "2023-06-04T01:02:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,323 | ["tests/queries/0_stateless/02789_jit_cannot_convert_column.reference", "tests/queries/0_stateless/02789_jit_cannot_convert_column.sql", "tests/queries/0_stateless/02790_jit_wrong_result.reference", "tests/queries/0_stateless/02790_jit_wrong_result.sql"] | Crash at llvm::X86InstrInfo::copyPhysReg | **Describe what's wrong**
The SELECT statement makes the server crash.
**Does it reproduce on recent release?**
It can be reproduced in the latest version.
**How to reproduce**
Version: 23.5.1 (commit 806ef08e5f6b3fc93993d19aac505290e9bfa554)
Easy to reproduce in ClickHouse fiddle: https://fiddle.clickhouse.com/d399218b-824f-41b9-9d6e-fb824152c759
_Set up database_
```sql
create table t1 (pkey UInt32, c8 UInt32, c9 String, c10 Float32, c11 String, primary key(c8)) engine = ReplacingMergeTree;
create table t3 (vkey UInt32, pkey UInt32, c15 UInt32) engine = Log;
```
_Bug-triggering query_
```sql
with cte_4 as (
select
ref_10.c11 as c_2_c2350_1,
ref_9.c9 as c_2_c2351_2
from
t1 as ref_9
right outer join t1 as ref_10
on (ref_9.c11 = ref_10.c9)
inner join t3 as ref_11
on (ref_10.c8 = ref_11.vkey)
where ((ref_10.pkey + ref_11.pkey) between ref_11.vkey and (case when (-30.87 >= ref_9.c10) then ref_11.c15 else ref_11.pkey end)))
select
ref_13.c_2_c2350_1 as c_2_c2357_3
from
cte_4 as ref_13
where (ref_13.c_2_c2351_2) in (
select
ref_14.c_2_c2351_2 as c_5_c2352_0
from
cte_4 as ref_14);
```
**Expected behavior**
No crash.
**Actual behavior**
The server crashes.
The log:
```
[a42dd51a4b55] 2023.05.29 17:47:57.669009 [ 1780230 ] <Fatal> BaseDaemon: ########################################
[a42dd51a4b55] 2023.05.29 17:47:57.669257 [ 1780230 ] <Fatal> BaseDaemon: (version 23.5.1.1, build id: 91219368B241C6A86EEC6081927CBB6262DAE94A) (from thread 1779924) (query_id: 925605fe-1d79-40fd-8e27-39136e849fa0) (query: with cte_4 as (
select
ref_10.c11 as c_2_c2350_1,
ref_9.c9 as c_2_c2351_2
from
t1 as ref_9
right outer join t1 as ref_10
on (ref_9.c11 = ref_10.c9)
inner join t3 as ref_11
on (ref_10.c8 = ref_11.vkey)
where ((ref_10.pkey + ref_11.pkey) between ref_11.vkey and (case when (-30.87 >= ref_9.c10) then ref_11.c15 else ref_11.pkey end)))
select
ref_13.c_2_c2350_1 as c_2_c2357_3
from
cte_4 as ref_13
where (ref_13.c_2_c2351_2) in (
select
ref_14.c_2_c2351_2 as c_5_c2352_0
from
cte_4 as ref_14);) Received signal Aborted (6)
[a42dd51a4b55] 2023.05.29 17:47:57.669393 [ 1780230 ] <Fatal> BaseDaemon:
[a42dd51a4b55] 2023.05.29 17:47:57.669460 [ 1780230 ] <Fatal> BaseDaemon: Stack trace: 0x00007f569f5f500b 0x00007f569f5d4859 0x000000001a294cd8 0x000000001a294b86 0x0000000018e3f1b5 0x000000001955baa5 0x00000000193a3f4c 0x000000001a1a8b8a 0x000000001a1b0793 0x000000001a1a97a8 0x00000000174b0eea 0x00000000174afd3c 0x00000000174afa57 0x00000000174bb69e 0x0000000016bc5e06 0x00000000166d4b29 0x0000000016852bfc 0x00000000184348f9 0x000000001843758b 0x000000001844d623 0x00000000171690c3 0x000000001742aee7 0x00000000174281ef 0x00000000180bde24 0x00000000180ce159 0x000000001a68b547 0x000000001a68ba2d 0x000000001a7f2727 0x000000001a7f0342 0x00007f569f7ac609 0x00007f569f6d1133
[a42dd51a4b55] 2023.05.29 17:47:57.669540 [ 1780230 ] <Fatal> BaseDaemon: 3. raise @ 0x00007f569f5f500b in ?
[a42dd51a4b55] 2023.05.29 17:47:57.669604 [ 1780230 ] <Fatal> BaseDaemon: 4. abort @ 0x00007f569f5d4859 in ?
[a42dd51a4b55] 2023.05.29 17:47:59.098108 [ 1780230 ] <Fatal> BaseDaemon: 5. llvm::report_fatal_error(llvm::Twine const&, bool) @ 0x000000001a294cd8 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:00.516065 [ 1780230 ] <Fatal> BaseDaemon: 6. ? @ 0x000000001a294b86 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:01.869262 [ 1780230 ] <Fatal> BaseDaemon: 7. llvm::X86InstrInfo::copyPhysReg(llvm::MachineBasicBlock&, llvm::MachineInstrBundleIterator<llvm::MachineInstr, false>, llvm::DebugLoc const&, llvm::MCRegister, llvm::MCRegister, bool) const @ 0x0000000018e3f1b5 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:03.209189 [ 1780230 ] <Fatal> BaseDaemon: 8. (anonymous namespace)::ExpandPostRA::runOnMachineFunction(llvm::MachineFunction&) @ 0x000000001955baa5 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:04.634853 [ 1780230 ] <Fatal> BaseDaemon: 9. llvm::MachineFunctionPass::runOnFunction(llvm::Function&) @ 0x00000000193a3f4c in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:05.983760 [ 1780230 ] <Fatal> BaseDaemon: 10. llvm::FPPassManager::runOnFunction(llvm::Function&) @ 0x000000001a1a8b8a in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:07.316860 [ 1780230 ] <Fatal> BaseDaemon: 11. llvm::FPPassManager::runOnModule(llvm::Module&) @ 0x000000001a1b0793 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:08.701261 [ 1780230 ] <Fatal> BaseDaemon: 12. llvm::legacy::PassManagerImpl::run(llvm::Module&) @ 0x000000001a1a97a8 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:08.741010 [ 1780230 ] <Fatal> BaseDaemon: 13. ./build/./src/Interpreters/JIT/CHJIT.cpp:0: DB::JITCompiler::compile(llvm::Module&) @ 0x00000000174b0eea in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:08.778865 [ 1780230 ] <Fatal> BaseDaemon: 14. ./build/./src/Interpreters/JIT/CHJIT.cpp:0: DB::CHJIT::compileModule(std::unique_ptr<llvm::Module, std::default_delete<llvm::Module>>) @ 0x00000000174afd3c in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:08.811988 [ 1780230 ] <Fatal> BaseDaemon: 15. ./build/./src/Interpreters/JIT/CHJIT.cpp:361: DB::CHJIT::compileModule(std::function<void (llvm::Module&)>) @ 0x00000000174afa57 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:08.843042 [ 1780230 ] <Fatal> BaseDaemon: 16.1. inlined from ./build/./contrib/llvm-project/libcxx/include/__functional/function.h:818: ~__policy_func
[a42dd51a4b55] 2023.05.29 17:48:08.843208 [ 1780230 ] <Fatal> BaseDaemon: 16.2. inlined from ./build/./contrib/llvm-project/libcxx/include/__functional/function.h:1174: ~function
[a42dd51a4b55] 2023.05.29 17:48:08.843259 [ 1780230 ] <Fatal> BaseDaemon: 16. ./build/./src/Interpreters/JIT/compileFunction.cpp:167: DB::compileFunction(DB::CHJIT&, DB::IFunctionBase const&) @ 0x00000000174bb69e in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:08.881692 [ 1780230 ] <Fatal> BaseDaemon: 17.1. inlined from ./build/./contrib/llvm-project/libcxx/include/new:246: void* std::__libcpp_operator_new[abi:v15000]<unsigned long>(unsigned long)
[a42dd51a4b55] 2023.05.29 17:48:08.881856 [ 1780230 ] <Fatal> BaseDaemon: 17.2. inlined from ./build/./contrib/llvm-project/libcxx/include/new:272: std::__libcpp_allocate[abi:v15000](unsigned long, unsigned long)
[a42dd51a4b55] 2023.05.29 17:48:08.881956 [ 1780230 ] <Fatal> BaseDaemon: 17.3. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/allocator.h:112: std::allocator<std::__shared_ptr_emplace<DB::CompiledFunctionHolder, std::allocator<DB::CompiledFunctionHolder>>>::allocate[abi:v15000](unsigned long)
[a42dd51a4b55] 2023.05.29 17:48:08.882017 [ 1780230 ] <Fatal> BaseDaemon: 17.4. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/allocator_traits.h:262: std::allocator_traits<std::allocator<std::__shared_ptr_emplace<DB::CompiledFunctionHolder, std::allocator<DB::CompiledFunctionHolder>>>>::allocate[abi:v15000](std::allocator<std::__shared_ptr_emplace<DB::CompiledFunctionHolder, std::allocator<DB::CompiledFunctionHolder>>>&, unsigned long)
[a42dd51a4b55] 2023.05.29 17:48:08.882079 [ 1780230 ] <Fatal> BaseDaemon: 17.5. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/allocation_guard.h:53: __allocation_guard<std::allocator<DB::CompiledFunctionHolder> >
[a42dd51a4b55] 2023.05.29 17:48:08.882136 [ 1780230 ] <Fatal> BaseDaemon: 17.6. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:952: std::shared_ptr<DB::CompiledFunctionHolder> std::allocate_shared[abi:v15000]<DB::CompiledFunctionHolder, std::allocator<DB::CompiledFunctionHolder>, DB::CompiledFunction&, void>(std::allocator<DB::CompiledFunctionHolder> const&, DB::CompiledFunction&)
[a42dd51a4b55] 2023.05.29 17:48:08.882201 [ 1780230 ] <Fatal> BaseDaemon: 17.7. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:962: std::shared_ptr<DB::CompiledFunctionHolder> std::make_shared[abi:v15000]<DB::CompiledFunctionHolder, DB::CompiledFunction&, void>(DB::CompiledFunction&)
[a42dd51a4b55] 2023.05.29 17:48:08.882256 [ 1780230 ] <Fatal> BaseDaemon: 17.8. inlined from ./build/./src/Interpreters/ExpressionJIT.cpp:304: operator()
[a42dd51a4b55] 2023.05.29 17:48:08.882311 [ 1780230 ] <Fatal> BaseDaemon: 17.9. inlined from ./build/./src/Common/CacheBase.h:148: std::pair<std::shared_ptr<DB::CompiledExpressionCacheEntry>, bool> DB::CacheBase<wide::integer<128ul, unsigned int>, DB::CompiledExpressionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::getOrSet<DB::compile(DB::CompileDAG const&, unsigned long)::$_0>(wide::integer<128ul, unsigned int> const&, DB::compile(DB::CompileDAG const&, unsigned long)::$_0&&)
[a42dd51a4b55] 2023.05.29 17:48:08.882380 [ 1780230 ] <Fatal> BaseDaemon: 17.10. inlined from ./build/./src/Interpreters/ExpressionJIT.cpp:300: DB::compile(DB::CompileDAG const&, unsigned long)
[a42dd51a4b55] 2023.05.29 17:48:08.882459 [ 1780230 ] <Fatal> BaseDaemon: 17. ./build/./src/Interpreters/ExpressionJIT.cpp:593: DB::ActionsDAG::compileFunctions(unsigned long, std::unordered_set<DB::ActionsDAG::Node const*, std::hash<DB::ActionsDAG::Node const*>, std::equal_to<DB::ActionsDAG::Node const*>, std::allocator<DB::ActionsDAG::Node const*>> const&) @ 0x0000000016bc5e06 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.196679 [ 1780230 ] <Fatal> BaseDaemon: 18. ./build/./src/Interpreters/ActionsDAG.cpp:1070: DB::ActionsDAG::compileExpressions(unsigned long, std::unordered_set<DB::ActionsDAG::Node const*, std::hash<DB::ActionsDAG::Node const*>, std::equal_to<DB::ActionsDAG::Node const*>, std::allocator<DB::ActionsDAG::Node const*>> const&) @ 0x00000000166d4b29 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.262428 [ 1780230 ] <Fatal> BaseDaemon: 19. ./build/./src/Interpreters/ExpressionActions.cpp:0: DB::ExpressionActions::ExpressionActions(std::shared_ptr<DB::ActionsDAG>, DB::ExpressionActionsSettings const&) @ 0x0000000016852bfc in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.286035 [ 1780230 ] <Fatal> BaseDaemon: 20. ./build/./src/Processors/QueryPlan/FilterStep.cpp:57: DB::FilterStep::transformPipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&) @ 0x00000000184348f9 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.303390 [ 1780230 ] <Fatal> BaseDaemon: 21. ./build/./src/Processors/QueryPlan/ITransformingStep.cpp:0: DB::ITransformingStep::updatePipeline(std::vector<std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>, std::allocator<std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>>>, DB::BuildQueryPipelineSettings const&) @ 0x000000001843758b in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.347798 [ 1780230 ] <Fatal> BaseDaemon: 22.1. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>::reset[abi:v15000](DB::QueryPipelineBuilder*)
[a42dd51a4b55] 2023.05.29 17:48:09.347968 [ 1780230 ] <Fatal> BaseDaemon: 22.2. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:225: std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>::operator=[abi:v15000](std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>&&)
[a42dd51a4b55] 2023.05.29 17:48:09.348027 [ 1780230 ] <Fatal> BaseDaemon: 22. ./build/./src/Processors/QueryPlan/QueryPlan.cpp:189: DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x000000001844d623 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.403658 [ 1780230 ] <Fatal> BaseDaemon: 23. ./build/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:379: DB::InterpreterSelectWithUnionQuery::execute() @ 0x00000000171690c3 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.494530 [ 1780230 ] <Fatal> BaseDaemon: 24. ./build/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000001742aee7 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.595519 [ 1780230 ] <Fatal> BaseDaemon: 25. ./build/./src/Interpreters/executeQuery.cpp:1180: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000174281ef in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.666398 [ 1780230 ] <Fatal> BaseDaemon: 26. ./build/./src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x00000000180bde24 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.781625 [ 1780230 ] <Fatal> BaseDaemon: 27. ./build/./src/Server/TCPHandler.cpp:2045: DB::TCPHandler::run() @ 0x00000000180ce159 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.785181 [ 1780230 ] <Fatal> BaseDaemon: 28. ./build/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000001a68b547 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.790956 [ 1780230 ] <Fatal> BaseDaemon: 29.1. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: std::default_delete<Poco::Net::TCPServerConnection>::operator()[abi:v15000](Poco::Net::TCPServerConnection*) const
[a42dd51a4b55] 2023.05.29 17:48:09.791106 [ 1780230 ] <Fatal> BaseDaemon: 29.2. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:305: std::unique_ptr<Poco::Net::TCPServerConnection, std::default_delete<Poco::Net::TCPServerConnection>>::reset[abi:v15000](Poco::Net::TCPServerConnection*)
[a42dd51a4b55] 2023.05.29 17:48:09.791166 [ 1780230 ] <Fatal> BaseDaemon: 29.3. inlined from ./build/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:259: ~unique_ptr
[a42dd51a4b55] 2023.05.29 17:48:09.791221 [ 1780230 ] <Fatal> BaseDaemon: 29. ./build/./base/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x000000001a68ba2d in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.797885 [ 1780230 ] <Fatal> BaseDaemon: 30. ./build/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x000000001a7f2727 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.803906 [ 1780230 ] <Fatal> BaseDaemon: 31.1. inlined from ./build/./base/poco/Foundation/include/Poco/SharedPtr.h:139: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::assign(Poco::Runnable*)
[a42dd51a4b55] 2023.05.29 17:48:09.804065 [ 1780230 ] <Fatal> BaseDaemon: 31.2. inlined from ./build/./base/poco/Foundation/include/Poco/SharedPtr.h:180: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::operator=(Poco::Runnable*)
[a42dd51a4b55] 2023.05.29 17:48:09.804116 [ 1780230 ] <Fatal> BaseDaemon: 31. ./build/./base/poco/Foundation/src/Thread_POSIX.cpp:350: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001a7f0342 in /usr/bin/clickhouse
[a42dd51a4b55] 2023.05.29 17:48:09.804233 [ 1780230 ] <Fatal> BaseDaemon: 32. ? @ 0x00007f569f7ac609 in ?
[a42dd51a4b55] 2023.05.29 17:48:09.804315 [ 1780230 ] <Fatal> BaseDaemon: 33. clone @ 0x00007f569f6d1133 in ?
[a42dd51a4b55] 2023.05.29 17:48:09.804386 [ 1780230 ] <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
Error on processing query: Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF) (version 23.5.1.1)
(query: with cte_4 as (
select
ref_10.c11 as c_2_c2350_1,
ref_9.c9 as c_2_c2351_2
from
t1 as ref_9
right outer join t1 as ref_10
on (ref_9.c11 = ref_10.c9)
inner join t3 as ref_11
on (ref_10.c8 = ref_11.vkey)
where ((ref_10.pkey + ref_11.pkey) between ref_11.vkey and (case when (-30.87 >= ref_9.c10) then ref_11.c15 else ref_11.pkey end)))
select
ref_13.c_2_c2350_1 as c_2_c2357_3
from
cte_4 as ref_13
where (ref_13.c_2_c2351_2) in (
select
ref_14.c_2_c2351_2 as c_5_c2352_0
from
cte_4 as ref_14);)
```
**Additional context**
The earliest reproducible version is 22.10 in fiddle: https://fiddle.clickhouse.com/3e5f6a67-80cf-415e-bfc1-d5f18cbf4313
| https://github.com/ClickHouse/ClickHouse/issues/50323 | https://github.com/ClickHouse/ClickHouse/pull/51113 | db82e94e68c48dd01a2e91be597cbedc7b56a188 | e28dc5d61c924992ddd0066e0e5e5bb05b848db3 | "2023-05-29T18:01:34Z" | c++ | "2023-12-08T02:27:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,311 | ["docs/en/sql-reference/functions/hash-functions.md", "src/Functions/FunctionsHashing.h", "src/Functions/FunctionsHashingMisc.cpp", "tests/queries/0_stateless/02534_keyed_siphash.reference", "tests/queries/0_stateless/02534_keyed_siphash.sql", "tests/queries/0_stateless/02552_siphash128_reference.reference", "tests/queries/0_stateless/02552_siphash128_reference.sql"] | sipHash64Keyed not same sipHash64 | Testing queries (query 6 return Error):
```sql
-- Query 1
DROP TABLE IF EXISTS sipHash64Keyed_test;

-- Query 2
CREATE TABLE sipHash64Keyed_test
ENGINE = Memory()
AS
SELECT
    1 a,
    'test' b;

-- Query 3
SELECT *
FROM sipHash64Keyed_test
FORMAT TSVWithNamesAndTypes;

-- Query 4
SELECT sipHash64Keyed((toUInt64(0), toUInt64(0)), 1, 'test')
FORMAT TSVWithNamesAndTypes;

-- Query 5
SELECT sipHash64(tuple(*))
FROM sipHash64Keyed_test
FORMAT TSVWithNamesAndTypes;

-- Query 6
SELECT sipHash64Keyed((toUInt64(0), toUInt64(0)), tuple(*))
FROM sipHash64Keyed_test
FORMAT TSVWithNamesAndTypes;
```

| https://github.com/ClickHouse/ClickHouse/issues/50311 | https://github.com/ClickHouse/ClickHouse/pull/53525 | 5395a68baab921b71a93d7759666ed7d37beba5c | b8912203e4b73770f5442881f91331f736d42d20 | "2023-05-29T09:43:03Z" | c++ | "2023-08-18T12:14:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50289 | ["src/Common/Config/ConfigProcessor.cpp", "src/Common/Config/ConfigProcessor.h"] | how i can use udf function xml file like test_function.xml and graphite_rollup.xml used together | When I add a test_function.xml under /etc/clickhouse-server/ with the following content:
```xml
<functions>
    <function>
        <type>executable</type>
        <name>test_function_python</name>
        <return_type>String</return_type>
        <argument>
            <type>UInt64</type>
            <name>value</name>
        </argument>
        <format>TabSeparated</format>
        <command>test_function.py</command>
    </function>
</functions>
```
With this in place, `select test_function_python(toUInt64(2))` works without problems. But when I add a graphite_rollup.xml file under /etc/clickhouse-server/conf.d with the following content:
```xml
<?xml version="1.0" ?>
<yandex>
    <graphite_rollup>
        <path_column_name>tags</path_column_name>
        <time_column_name>ts</time_column_name>
        <value_column_name>val</value_column_name>
        <version_column_name>updated</version_column_name>
        <default>
            <function>avg</function>
            <retention>
                <age>0</age>
                <precision>10</precision>
            </retention>
            <retention>
                <age>86400</age>
                <precision>30</precision>
            </retention>
            <retention>
                <age>172800</age>
                <precision>300</precision>
            </retention>
        </default>
    </graphite_rollup>
</yandex>
```
then something goes wrong: the server log keeps printing the following lines non-stop:

```
Merging configuration file '/etc/clickhouse-server/conf.d/graphite_rollup.xml'.
Processing configuration file '/etc/clickhouse-server/test_function.xml'.
```

It looks like these two file configurations are incompatible. How can I fix this?
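The referenced test_function.py is not shown in the issue. As a hypothetical sketch, a script compatible with the `TabSeparated` config above typically reads one tab-separated input row per line from stdin and writes one result line to stdout, flushing after every row; the transformation below is a placeholder, since the real script's logic is unknown:

```python
#!/usr/bin/env python3
import sys

def transform(value: str) -> str:
    # Placeholder logic: the real script's behavior is not shown in the issue.
    return "Value " + value

def serve(stdin=sys.stdin, stdout=sys.stdout) -> None:
    # One TabSeparated input row per line in, one result line out.
    for line in stdin:
        print(transform(line.rstrip("\n")), file=stdout)
        stdout.flush()  # flush per row so the server sees each result immediately

if __name__ == "__main__":
    serve()
```

The per-row `flush()` is the operationally important detail: without it, the server and the script can stall waiting on each other's buffered output.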
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,284 | ["tests/queries/0_stateless/02911_cte_invalid_query_analysis.reference", "tests/queries/0_stateless/02911_cte_invalid_query_analysis.sql"] | LOGICAL_ERROR: Bad cast from type DB::ColumnVector<char8_t> to DB::ColumnVector<signed char> | **Describe what's wrong**
The SELECT statement should output results normally, but instead it throws an exception.
**Does it reproduce on recent release?**
It can be reproduced in the latest version.
**How to reproduce**
Version: 23.5.1.1 (commit 3e6314675c6467bc4dd78f659bac862f7e9648f8)
Easy to reproduce in ClickHouse fiddle: https://fiddle.clickhouse.com/4996cfc2-e407-4c7e-b9e9-de67e040bbc9
_Set up database_
```sql
create table t0 (pkey UInt32, c1 UInt32, primary key(pkey)) engine = MergeTree;
create table t1 (vkey UInt32, primary key(vkey)) engine = MergeTree;
create table t3 (c17 String, primary key(c17)) engine = MergeTree;
insert into t1 values (3);
```
_Bug-triggering query_
```sql
WITH
cte_1 AS (select
subq_1.c_5_c1698_16 as c_2_c1702_3,
subq_1.c_5_c1694_12 as c_2_c1703_4
from
(select
covarPop(-0, 74) as c_5_c1686_4,
sumWithOverflow(0) as c_5_c1694_12,
covarPop(-53.64, 92.63) as c_5_c1698_16
from
t3 as ref_8
group by ref_8.c17) as subq_1)
select
ref_15.c_2_c1703_4 as c_2_c1723_6,
ref_15.c_2_c1702_3 as c_2_c1724_7
from
t0 as ref_14
RIGHT outer join cte_1 as ref_15
on (ref_14.c1 = ref_15.c_2_c1702_3)
RIGHT outer join t1 as ref_16
on (ref_14.pkey = ref_16.vkey);
```
**Expected behavior**
The SELECT statement is executed without any error.
**Actual behavior**
It throws an exception:
```
Received exception from server (version 23.4.2):
Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: There's no column 'subq_1.c_5_c1698_16' in table 'subq_1': While processing subq_1.c_5_c1698_16 AS c_2_c1702_3. (UNKNOWN_IDENTIFIER)
(query: WITH
cte_1 AS (select
subq_1.c_5_c1698_16 as c_2_c1702_3,
subq_1.c_5_c1694_12 as c_2_c1703_4
from
(select
covarPop(-0, 74) as c_5_c1686_4,
sumWithOverflow(0) as c_2_c1703_4,
covarPop(-53.64, 92.63) as c_2_c1702_3
from
t3 as ref_8
group by ref_8.c17) as subq_1)
select
ref_15.c_2_c1703_4 as c_2_c1723_6,
ref_15.c_2_c1702_3 as c_2_c1724_7
from
t0 as ref_14
RIGHT outer join cte_1 as ref_15
on (ref_14.c1 = ref_15.c_2_c1702_3)
RIGHT outer join t1 as ref_16
on (ref_14.pkey = ref_16.vkey);)
```
**Additional context**
The earliest reproducible version is 21 in fiddle: https://fiddle.clickhouse.com/cdaec019-a5f5-4c1b-8424-bf8c5662b435
| https://github.com/ClickHouse/ClickHouse/issues/50284 | https://github.com/ClickHouse/ClickHouse/pull/56517 | 0898cf3e06f844ea77c88b6e1153586794a67de6 | 8fabe2116414cd714aab59c5d22106e18da36cd3 | "2023-05-27T14:16:35Z" | c++ | "2023-11-10T11:27:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,283 | ["src/Functions/FunctionsComparison.h", "tests/queries/0_stateless/02769_nan_equality_comparison.reference", "tests/queries/0_stateless/02769_nan_equality_comparison.sql"] | Wrong result of SELECT statement with compile_expressions in recent commits (affected version: 21-23.5.1) | **Describe what's wrong**
The SELECT statement with contradictory WHERE conditions should return empty results. However, it outputs one row.
**Does it reproduce on recent release?**
It can be reproduced in the latest version.
**How to reproduce**
Version: 23.5.1.1 (commit 3e6314675c6467bc4dd78f659bac862f7e9648f8)
Easy to reproduce in ClickHouse fiddle: https://fiddle.clickhouse.com/7b30dc3b-efea-4b23-937c-6fcb19893bbc
_Set up database_
```sql
create table t1 (c6 UInt32, c7 UInt32, primary key(c6)) engine = MergeTree;
insert into t1 values (76, 57);
```
_Bug-triggering query_
```sql
select
c_5_c1470_1 as c_2_c1479_2
from
(select
stddevSamp(ref_10.c6) as c_5_c1470_1
from
t1 as ref_10) as subq_1
where ((subq_1.c_5_c1470_1 = subq_1.c_5_c1470_1)
and (not (subq_1.c_5_c1470_1 = subq_1.c_5_c1470_1)));
```
**Expected behavior**
The query must output an empty result, because the conditions `(subq_1.c_5_c1470_1 = subq_1.c_5_c1470_1)` and `(not (subq_1.c_5_c1470_1 = subq_1.c_5_c1470_1))` are contradictory.
**Actual behavior**
It outputs one row
```
+-------------+
| c_2_c1479_2 |
+-------------+
| nan |
+-------------+
```
**Additional context**
1. The first several times, the query outputs an empty result, but after several tries the results become incorrect, similar to #50039.
2. This bug still exists after the fix for https://github.com/ClickHouse/ClickHouse/issues/50039
3. if `set compile_expressions = 0`, the bug disappears, so it should be also related to compile_expressions.
4. The earliest reproducible version is 21 in fiddle: https://fiddle.clickhouse.com/617e1e6d-8e5c-41a7-acca-31c8bff38e04, which is different from #50269 (only head version).
5. Before version 21, the query outputs empty (e.g. 21.12.4.1-alpine: https://fiddle.clickhouse.com/c16460b5-3011-4fa9-b641-c0266a227c14).
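For context on why the expected result is empty: under IEEE-754 semantics, a NaN value (which `stddevSamp` over a single row produces here) never compares equal to itself, so `x = x` is false and the conjunction with its own negation is unsatisfiable for every float. Whether ClickHouse's interpreted and JIT-compiled comparison paths both follow the same rule is exactly what this class of bug is about; the Python snippet below only illustrates the IEEE-754 rules being assumed:

```python
x = float("nan")  # e.g. stddevSamp over a single row yields NaN

# IEEE-754: NaN is unordered, so it never compares equal to itself,
assert (x == x) is False
# and, consequently, "not equal to itself" is true:
assert (x != x) is True

# The WHERE predicate from the report, evaluated on a NaN input:
predicate = (x == x) and not (x == x)
print(predicate)  # False: the conjunction is contradictory for every float
```

Under the same rules, the `x <> x` predicate in the sibling report #50269 is true for NaN inputs, which is why the `=` and `<>` paths must stay consistent between the interpreter and the JIT.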
| https://github.com/ClickHouse/ClickHouse/issues/50283 | https://github.com/ClickHouse/ClickHouse/pull/50287 | 3a3cee586a6f69ff23b5bdb85d39ae86ad290a62 | e1d535c890279ac5de4cc5bf44c38b223505c6ee | "2023-05-27T13:04:06Z" | c++ | "2023-05-28T22:56:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,269 | ["src/Functions/FunctionsComparison.h", "tests/queries/0_stateless/02769_nan_equality_comparison.reference", "tests/queries/0_stateless/02769_nan_equality_comparison.sql"] | Wrong result of SELECT statement with compile_expressions in recent commits (head version) | **Describe what's wrong**
The two semantically equivalent SELECT statements should output the same results, but they do not.
**Does it reproduce on recent release?**
It cannot be reproduced in the latest release, but it can be reproduced in the head (master) version.
**How to reproduce**
Version: 23.5.1.1 (commit 3e6314675c6467bc4dd78f659bac862f7e9648f8)
Easy reproduce in ClickHouse fiddle: https://fiddle.clickhouse.com/184cce9d-1e87-4666-afc1-97d281cc11cd
_Set up database_
```sql
create table t1 (pkey UInt32, c4 UInt32, c5 Float32, primary key(pkey)) engine = MergeTree;
insert into t1 values (12000, 36, 77.94);
```
_SELECT statement 1_
```sql
select
subq_1.c_4_c3362_6 as c_1_c3371_2
from
(select
corr(ref_0.c4,ref_0.c4) over w0 as c_4_c3362_6
from
t1 as ref_0
window w0 as (partition by ref_0.c5 order by ref_0.pkey asc)
) as subq_1
where not (not (subq_1.c_4_c3362_6 <> subq_1.c_4_c3362_6));
```
As `not (not (subq_1.c_4_c3362_6 <> subq_1.c_4_c3362_6))` can be replaced with `subq_1.c_4_c3362_6 <> subq_1.c_4_c3362_6`, I get the semantically-equivalent SELECT statement:
_SELECT statement 2_
```sql
select
subq_1.c_4_c3362_6 as c_1_c3371_2
from
(select
corr(ref_0.c4,ref_0.c4) over w0 as c_4_c3362_6
from
t1 as ref_0
window w0 as (partition by ref_0.c5 order by ref_0.pkey asc)
) as subq_1
where subq_1.c_4_c3362_6 <> subq_1.c_4_c3362_6;
```
**Expected behavior**
The two SELECT statements output the same results.
**Actual behavior**
They are different.
SELECT statement 1 outputs:
```
+-------------+
| c_1_c3371_2 |
+-------------+
| nan |
+-------------+
```
SELECT statement 2 outputs an empty result.
**Additional context**
1. This bug can be triggered only in the head version, so it must have been introduced by recent commits.
2. The first several times, SELECT statement 1 outputs an empty result, but after several tries the results become incorrect, similar to #50039.
3. If `set compile_expressions = 0`, the bug disappears, so it should also be related to `compile_expressions`.
| https://github.com/ClickHouse/ClickHouse/issues/50269 | https://github.com/ClickHouse/ClickHouse/pull/50287 | 3a3cee586a6f69ff23b5bdb85d39ae86ad290a62 | e1d535c890279ac5de4cc5bf44c38b223505c6ee | "2023-05-26T14:16:29Z" | c++ | "2023-05-28T22:56:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,253 | ["docs/en/operations/server-configuration-parameters/settings.md"] | Document `max_partition_size_to_drop` | Max table size to drop is documented here: https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#max-table-size-to-drop, but `max_partition_size_to_drop` is missing. | https://github.com/ClickHouse/ClickHouse/issues/50253 | https://github.com/ClickHouse/ClickHouse/pull/50256 | 6554e7438ee9ca2418f7717178b70648c34b10f3 | 9da34f2aa66a520879c0eaeda20e452203598a00 | "2023-05-25T20:39:55Z" | c++ | "2023-05-27T00:46:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,239 | ["src/Processors/QueryPlan/PartsSplitter.cpp", "src/Processors/QueryPlan/PartsSplitter.h", "src/Processors/QueryPlan/ReadFromMergeTree.cpp", "tests/queries/0_stateless/01861_explain_pipeline.reference", "tests/queries/0_stateless/02780_final_streams_data_skipping_index.reference", "tests/queries/0_stateless/02791_final_block_structure_mismatch_bug.reference", "tests/queries/0_stateless/02791_final_block_structure_mismatch_bug.sql"] | DB::Exception: Block structure mismatch in Pipe::unitePipes stream: different number of columns | **Describe the problem**
Defining a table whose PK/ORDER BY applies a function such as toYYYYMMDD/toDate/toDateTime to a column (to downsample and allow a less granular ordering) causes problems in queries. With `do_not_merge_across_partitions_select_final` enabled, queries with FINAL do not work.
**How to reproduce**
```sql
SET do_not_merge_across_partitions_select_final=1;
CREATE TABLE test_block_mismatch
(
a UInt32,
b DateTime
)
ENGINE = ReplacingMergeTree
PARTITION BY toYYYYMM(b)
ORDER BY (toDate(b), a);
-- Insert a = 1 in partition 1
INSERT INTO test_block_mismatch VALUES
(1, toDateTime('2023-01-01 12:12:12'));
INSERT INTO test_block_mismatch VALUES
(1, toDateTime('2023-01-01 12:12:12'));
SELECT count(*) FROM test_block_mismatch FINAL;
-- Insert a = 1 into partition 2
INSERT INTO test_block_mismatch VALUES
(1, toDateTime('2023-02-02 12:12:12'));
INSERT INTO test_block_mismatch VALUES
(1, toDateTime('2023-02-02 12:12:12'));
SELECT count(*) FROM test_block_mismatch FINAL;
-- Insert a = 2 into partition 1
INSERT INTO test_block_mismatch VALUES
(2, toDateTime('2023-01-01 12:12:12'));
INSERT INTO test_block_mismatch VALUES
(2, toDateTime('2023-01-01 12:12:12'));
SELECT count(*) FROM test_block_mismatch FINAL;
```
* ClickHouse server version: happens on 22.7+
* Stack trace
```
2023.05.25 17:33:04.956590 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Debug> executeQuery: (from [::ffff:127.0.0.1]:52202, user: admin) SELECT count(*) FROM test_block_mismatch FINAL; (stage: Complete)
2023.05.25 17:33:04.956726 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Trace> ContextAccess (admin): Access granted: SELECT(a) ON tests.test_block_mismatch
2023.05.25 17:33:04.956742 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2023.05.25 17:33:04.956784 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Debug> tests.test_block_mismatch (6f7cfa9d-a244-4f19-9f10-b4b851393612) (SelectExecutor): Key condition: unknown
2023.05.25 17:33:04.956814 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Debug> tests.test_block_mismatch (6f7cfa9d-a244-4f19-9f10-b4b851393612) (SelectExecutor): MinMax index condition: unknown
2023.05.25 17:33:04.957075 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Debug> tests.test_block_mismatch (6f7cfa9d-a244-4f19-9f10-b4b851393612) (SelectExecutor): Selected 6/6 parts by partition key, 6 parts by primary key, 6/6 marks by primary key, 6 marks to read from 6 ranges
2023.05.25 17:33:04.957113 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Trace> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part 202301_1_1_0, approx. 1 rows starting from 0
2023.05.25 17:33:04.957118 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Trace> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part 202301_2_2_0, approx. 1 rows starting from 0
2023.05.25 17:33:04.957348 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Trace> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part 202301_5_5_0, approx. 1 rows starting from 0
2023.05.25 17:33:04.957372 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Trace> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part 202301_6_6_0, approx. 1 rows starting from 0
2023.05.25 17:33:04.957579 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Trace> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part 202302_3_3_0, approx. 1 rows starting from 0
2023.05.25 17:33:04.957600 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Trace> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part 202302_4_4_0, approx. 1 rows starting from 0
2023.05.25 17:33:04.957812 [ 2200 ] {08844603-45d8-47f0-84ee-c1dc48cf2754} <Error> executeQuery: Code: 49. DB::Exception: Block structure mismatch in Pipe::unitePipes stream: different number of columns:
a UInt32 UInt32(size = 0), b DateTime UInt32(size = 0), toDate(b) Date UInt16(size = 0)
a UInt32 UInt32(size = 0), b DateTime UInt32(size = 0), toDate(b) Date UInt16(size = 0), toDate(b) Date UInt16(size = 0). (LOGICAL_ERROR) (version 23.3.2.37 (official build)) (from [::ffff:127.0.0.1]:52202) (in query: SELECT count(*) FROM test_block_mismatch FINAL;), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe1e20b5 in /usr/bin/clickhouse
1. ? @ 0x127ce58c in /usr/bin/clickhouse
2. ? @ 0x127cbfdb in /usr/bin/clickhouse
3. DB::Pipe::unitePipes(std::vector<DB::Pipe, std::allocator<DB::Pipe>>, std::vector<std::shared_ptr<DB::IProcessor>, std::allocator<std::shared_ptr<DB::IProcessor>>>*, bool) @ 0x12a06a6d in /usr/bin/clickhouse
4. DB::ReadFromMergeTree::spreadMarkRangesAmongStreamsFinal(DB::RangesInDataParts&&, unsigned long, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::ActionsDAG>&) @ 0x14dbfe30 in /usr/bin/clickhouse
5. DB::ReadFromMergeTree::spreadMarkRanges(DB::RangesInDataParts&&, unsigned long, DB::ReadFromMergeTree::AnalysisResult&, std::shared_ptr<DB::ActionsDAG>&) @ 0x14dc7c7e in /usr/bin/clickhouse
6. DB::ReadFromMergeTree::initializePipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&) @ 0x14dc940f in /usr/bin/clickhouse
7. DB::ISourceStep::updatePipeline(std::vector<std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>, std::allocator<std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>>>, DB::BuildQueryPipelineSettings const&) @ 0x14d8cd6c in /usr/bin/clickhouse
8. DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x14da52e9 in /usr/bin/clickhouse
9. DB::InterpreterSelectWithUnionQuery::execute() @ 0x138d49f8 in /usr/bin/clickhouse
10. ? @ 0x13bf9ec7 in /usr/bin/clickhouse
11. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x13bf746d in /usr/bin/clickhouse
12. DB::TCPHandler::runImpl() @ 0x149c994c in /usr/bin/clickhouse
13. DB::TCPHandler::run() @ 0x149df159 in /usr/bin/clickhouse
14. Poco::Net::TCPServerConnection::start() @ 0x17919874 in /usr/bin/clickhouse
15. Poco::Net::TCPServerDispatcher::run() @ 0x1791aa9b in /usr/bin/clickhouse
16. Poco::PooledThread::run() @ 0x17aa2327 in /usr/bin/clickhouse
17. Poco::ThreadImpl::runnableEntry(void*) @ 0x17a9fd5d in /usr/bin/clickhouse
18. ? @ 0x7f9783894b43 in ?
19. ? @ 0x7f9783926a00 in ?
0 rows in set. Elapsed: 0.002 sec.
Received exception from server (version 23.3.2):
Code: 49. DB::Exception: Received from laptop:9440. DB::Exception: Block structure mismatch in Pipe::unitePipes stream: different number of columns:
a UInt32 UInt32(size = 0), b DateTime UInt32(size = 0), toDate(b) Date UInt16(size = 0)
a UInt32 UInt32(size = 0), b DateTime UInt32(size = 0), toDate(b) Date UInt16(size = 0), toDate(b) Date UInt16(size = 0). (LOGICAL_ERROR)
```
| https://github.com/ClickHouse/ClickHouse/issues/50239 | https://github.com/ClickHouse/ClickHouse/pull/51492 | e4dd603919d2e5e98a9a2360e9d88984026a889b | 3a48a7b8727d32b8ccbbf8c27ed12eccee4e2fad | "2023-05-25T15:37:12Z" | c++ | "2023-07-08T05:05:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,198 | ["tests/queries/0_stateless/02769_compare_functions_nan.reference", "tests/queries/0_stateless/02769_compare_functions_nan.sql"] | Wrong result of SELECT statement with compile_expressions | **Describe what's wrong**
The SELECT statement with contradictory WHERE conditions should return empty results. However, it outputs one row.
**Does it reproduce on recent release?**
It can be reproduced in the latest version.
**How to reproduce**
Version: 23.5.1.1 (commit 3e6314675c6467bc4dd78f659bac862f7e9648f8)
Easy reproduce in ClickHouse fiddle: https://fiddle.clickhouse.com/87303f4e-a945-4df1-90bc-e6d87e811b4c
_Set up database_
```sql
create table t1 (c3 Float32, c4 Float32, primary key(c3)) engine = MergeTree;
insert into t1 values (-10.75, 95.57);
```
_bug-triggering query_
```sql
select *
from
(select
corr(ref_0.c3, ref_0.c3) as c_5_c102_4
from
t1 as ref_0
group by ref_0.c4) as subq_0
LEFT ANTI join t1 as ref_1
on (subq_0.c_5_c102_4 = ref_1.c3)
where (ref_1.c3 >= ref_1.c3) and (not (ref_1.c3 >= ref_1.c3));
```
**Expected behavior**
The query must output an empty result because the conditions `(ref_1.c3 >= ref_1.c3)` and `(not (ref_1.c3 >= ref_1.c3))` are contradictory.
**Actual behavior**
It outputs one row
```
+------------+-----+----+
| c_5_c102_4 | c3 | c4 |
+------------+-----+----+
| nan | nan | 0 |
+------------+-----+----+
```
**Additional context**
1. The first several times, the query outputs an empty result, but after several tries the results become incorrect, similar to #50039.
2. This bug still exists after the fix for https://github.com/ClickHouse/ClickHouse/issues/50039
3. If `set compile_expressions = 0`, the bug disappears, so it should also be related to `compile_expressions`.
| https://github.com/ClickHouse/ClickHouse/issues/50198 | https://github.com/ClickHouse/ClickHouse/pull/50366 | 4d4112ff536f819514973dfd0cb8274cf044bb3e | 2efebee5a37b25b26898705ecd833a2c3091eaf9 | "2023-05-24T15:49:24Z" | c++ | "2023-05-31T13:24:36Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,183 | ["src/Interpreters/Context.cpp", "src/Interpreters/Context.h", "src/Processors/QueryPlan/Optimizations/optimizeUseAggregateProjection.cpp", "src/Processors/QueryPlan/Optimizations/optimizeUseNormalProjection.cpp", "src/Processors/QueryPlan/ReadFromMergeTree.cpp", "src/Processors/QueryPlan/ReadFromPreparedSource.cpp", "src/Processors/QueryPlan/ReadFromPreparedSource.h", "tests/queries/0_stateless/01710_query_log_with_projection_info.reference", "tests/queries/0_stateless/01710_query_log_with_projection_info.sql"] | Missing projection QueryAccessInfo when query_plan_optimize_projection = true | **Describe the unexpected behaviour**
When `query_plan_optimize_projection = true`, there is no projection information in query_log.
This is because we no longer store the used projection in `query_info`. We need to find a place to store the name of the used projection inside the query plan.
P.S. we should also add the info here https://github.com/ClickHouse/ClickHouse/blob/master/src/Planner/PlannerJoinTree.cpp#L733
| https://github.com/ClickHouse/ClickHouse/issues/50183 | https://github.com/ClickHouse/ClickHouse/pull/52327 | 2e67a8927b256546881baee7c823ecb6ee918198 | 04462333d26181a03562098a1f0c6b5ee290b421 | "2023-05-24T12:45:28Z" | c++ | "2023-07-22T14:50:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,099 | ["src/Interpreters/InterpreterSelectQuery.cpp", "src/Interpreters/TreeOptimizer.cpp", "src/Interpreters/TreeOptimizer.h", "src/Interpreters/TreeRewriter.cpp", "tests/queries/0_stateless/02751_multiif_to_if_crash.reference", "tests/queries/0_stateless/02751_multiif_to_if_crash.sql"] | Segmentation fault on simple query with nested SELECT on multiif | When running this query on ClickHouse server version 23.4.2.11 (and previous 23.* versions):
`SELECT sum(A) FROM (SELECT multiIf(1, 1, NULL) as A);`
we get a segmentation fault on the ClickHouse server.
The following (small) variations do work correctly, though:
```
SELECT sum(multiIf(1, 1, NULL));
SELECT A FROM (SELECT multiIf(1, 1, NULL) as A);
SELECT sum(A) FROM (SELECT 1 as A);
SELECT sum(A) FROM (SELECT multiIf(true, 1, NULL) as A);
SELECT sum(A) FROM (SELECT multiIf(1, 1, 2) as A);
```
| https://github.com/ClickHouse/ClickHouse/issues/50099 | https://github.com/ClickHouse/ClickHouse/pull/50123 | 91bc0fad1bb932cd875ca4aa5ceda5f97584ea42 | 3a955661da7cfc514189b0049e9b9c3af9ceb7d1 | "2023-05-22T12:45:56Z" | c++ | "2023-05-23T12:29:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,093 | ["src/Interpreters/Context.cpp", "src/Interpreters/Context.h", "src/Processors/QueryPlan/Optimizations/optimizeUseAggregateProjection.cpp", "src/Processors/QueryPlan/Optimizations/optimizeUseNormalProjection.cpp", "src/Processors/QueryPlan/ReadFromMergeTree.cpp", "src/Processors/QueryPlan/ReadFromPreparedSource.cpp", "src/Processors/QueryPlan/ReadFromPreparedSource.h", "tests/queries/0_stateless/01710_query_log_with_projection_info.reference", "tests/queries/0_stateless/01710_query_log_with_projection_info.sql"] | Example of projection from documentations is not work | ClickHouse v.23.3.2.37
I'm trying to follow the projections example from the ClickHouse documentation, but it does not work.
Link to example from doc:
https://clickhouse.com/docs/en/sql-reference/statements/alter/projection#example-filtering-without-using-primary-keys
code:
```
CREATE TABLE visits_order
(
`user_id` UInt64,
`user_name` String,
`pages_visited` Nullable(Float64),
`user_agent` String
)
ENGINE = MergeTree()
PRIMARY KEY user_agent;
ALTER TABLE visits_order ADD PROJECTION user_name_projection (SELECT * ORDER BY user_name);
ALTER TABLE visits_order MATERIALIZE PROJECTION user_name_projection;
INSERT INTO visits_order SELECT
number,
'test',
1.5 * (number / 2),
'Android'
FROM numbers(1, 100);
SELECT
*
FROM visits_order
WHERE user_name='test'
LIMIT 2;
SELECT query, projections FROM system.query_log WHERE query_id='<query_id>'
```
So when I try to check the projections column in query_log, I see nothing ([]).
What am I doing wrong? | https://github.com/ClickHouse/ClickHouse/issues/50093 | https://github.com/ClickHouse/ClickHouse/pull/52327 | 2e67a8927b256546881baee7c823ecb6ee918198 | 04462333d26181a03562098a1f0c6b5ee290b421 | "2023-05-22T11:07:18Z" | c++ | "2023-07-22T14:50:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,039 | ["src/Functions/FunctionsComparison.h", "tests/queries/0_stateless/02763_jit_compare_functions_nan.reference", "tests/queries/0_stateless/02763_jit_compare_functions_nan.sql"] | Wrong result of the boolean expression with JIT (`compile_expressions`) while comparing NaNs | **Describe what's wrong**
The SELECT statement with contradictory WHERE conditions should return empty results. However, it outputs one row.
**Does it reproduce on recent release?**
It can be reproduced in the latest version.
**How to reproduce**
Version: 23.5.1.1 (commit 30464b939781e3aa0897acf9a38839760a2282f8)
_Set up database_
```sql
create table t3 (pkey UInt32, primary key(pkey)) engine = MergeTree;
create table t5 (pkey UInt32, primary key(pkey)) engine = MergeTree;
insert into t3 (pkey) values (2);
insert into t5 (pkey) values (2);
```
_bug-triggering query_
```sql
select *
from
t5 as ref_0
RIGHT join t3 as ref_3
on (ref_0.pkey = ref_3.pkey)
where (acos(ref_3.pkey) <> atan(ref_0.pkey)) and
(not (acos(ref_3.pkey) <> atan(ref_0.pkey)));
```
**Expected behavior**
The query must output an empty result because the conditions `acos(ref_3.pkey) <> atan(ref_0.pkey)` and `not (acos(ref_3.pkey) <> atan(ref_0.pkey))` are contradictory.
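For reference, `acos(2)` is outside the domain of `acos` and evaluates to NaN, and under IEEE 754 semantics NaN compares unequal to everything — so the first condition is true, its negation is false, and the conjunction `p AND NOT p` can never hold. A quick Python sketch of these semantics:

```python
import math

nan = float("nan")   # acos(2) is outside acos's domain, so it evaluates to NaN
x = math.atan(2.0)

p = (nan != x)       # IEEE 754: NaN compares unequal to everything
print(p)             # True
print(p and not p)   # False -- the WHERE clause p AND NOT p can never hold
```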
**Actual behavior**
It outputs one row
```
┌─pkey─┬─ref_3.pkey─┐
│    2 │          2 │
└──────┴────────────┘
1 row in set. Elapsed: 0.003 sec.
```
**Additional context**
The earliest reproducible version is 21.7 in [fiddle](https://fiddle.clickhouse.com/).
| https://github.com/ClickHouse/ClickHouse/issues/50039 | https://github.com/ClickHouse/ClickHouse/pull/50056 | bb91e3ac2e7112fffeef71bfa000f2b862910259 | dc4cb5223b736d0d92a27cc7d1f098f3c593304d | "2023-05-19T22:11:58Z" | c++ | "2023-05-22T22:45:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 50,016 | ["src/Analyzer/Passes/QueryAnalysisPass.cpp", "src/Interpreters/Context.cpp", "src/Interpreters/Context.h"] | Analyzer: JOIN with the same table function creates 2 identical storages | With new analyzer enabled in such query:
```
SELECT
sum(a.id) as aid,
sum(b.id) as bid
FROM file('tmp.csv') AS a
INNER JOIN file('tmp.csv') AS b
ON a.text = b.text
```
we execute the table function `file('tmp.csv')` twice, and as a result two identical storages are created. We should create the storage only once and reuse it. | https://github.com/ClickHouse/ClickHouse/issues/50016 | https://github.com/ClickHouse/ClickHouse/pull/50105 | f4c73e94d21c6de0b1af7da3c42c2db6bf97fc73 | a05088ab731f1e625ce5197829f59b765c94474f | "2023-05-19T09:07:01Z" | c++ | "2023-05-23T08:16:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,939 | ["src/DataTypes/DataTypeArray.cpp", "src/DataTypes/DataTypeArray.h", "src/Functions/FunctionBinaryArithmetic.h", "tests/queries/0_stateless/02812_pointwise_array_operations.reference", "tests/queries/0_stateless/02812_pointwise_array_operations.sql"] | Support pointwise operations on arrays | Currently to do pointwise operations on arrays we have to use ArrayMap. This is unnecessarily verbose e.g.
```
SELECT url, caption,
L2Distance(array_column,
arrayMap((x,y) -> x+y,
arrayMap((x,y) -> x-y, [1,2,3,4], [1,2,3,4]),
arrayMap((x,y) -> x+y, [5,6,7,4], [1,2,3,4])
)
) AS score FROM laion_10m ORDER BY score ASC LIMIT 10
```
I propose that `+` and `-` on arrays perform pointwise operations. This is currently not supported:
```
clickhouse-cloud :) select [1,2]+[3,4]
SELECT [1, 2] + [3, 4]
Query id: dbf44020-0866-4a29-ad58-7a8b8221fa2a
0 rows in set. Elapsed: 0.220 sec.
Received exception from server (version 23.3.1):
Code: 43. DB::Exception: Received from cbupclfpbv.us-east-2.aws.clickhouse-staging.com:9440. DB::Exception: Illegal types Array(UInt8) and Array(UInt8) of arguments of function plus: While processing [1, 2] + [3, 4]. (ILLEGAL_TYPE_OF_ARGUMENT)
```
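For illustration only (plain Python, not ClickHouse code), the proposed elementwise semantics of `+`/`-` on equal-length arrays would be:

```python
import operator

def pointwise(op, a, b):
    """Apply a binary op elementwise; the arrays must have equal length."""
    if len(a) != len(b):
        raise ValueError("arrays must have the same length")
    return [op(x, y) for x, y in zip(a, b)]

print(pointwise(operator.add, [1, 2], [3, 4]))              # [4, 6]
print(pointwise(operator.sub, [1, 2, 3, 4], [1, 2, 3, 4]))  # [0, 0, 0, 0]
```

With such semantics, the nested `arrayMap` calls in the first query collapse into plain arithmetic between array columns.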
The use case is ClickHouse for vector operations. | https://github.com/ClickHouse/ClickHouse/issues/49939 | https://github.com/ClickHouse/ClickHouse/pull/52625 | 3f915491f029ef030dc3d4777e5f60a3abf52822 | 9476344dc4e57eb3f96b7bf7595f274435a99394 | "2023-05-17T08:24:44Z" | c++ | "2023-08-08T09:58:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,938 | ["src/DataTypes/DataTypeDateTime64.h", "tests/queries/0_stateless/02785_summing_merge_tree_datetime64.reference", "tests/queries/0_stateless/02785_summing_merge_tree_datetime64.sql"] | SummingMergeTree support for DateTime64 | **Describe the unexpected behaviour**
When using a `DateTime64` column in a `SummingMergeTree`, where the column is neither part of the sorting key nor a summable field, you get the following error at insert time:
```
Received exception from server (version 23.5.1):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Illegal type DateTime64(3) of argument for aggregate function sumWithOverflow. (ILLEGAL_TYPE_OF_ARGUMENT)
(query: INSERT INTO summ.landing SELECT 1 pk, now64(3) timestamp, 1 value;)
```
This was discovered in CH 22.8 but is also happening in master.
**How to reproduce**
To reproduce the issue:
```sql
DROP DATABASE IF EXISTS summ;
CREATE DATABASE summ;
CREATE TABLE summ.landing ( `pk` UInt64, `timestamp` DateTime64(3), `value` UInt64)
ENGINE = SummingMergeTree() ORDER BY pk;
INSERT INTO summ.landing SELECT 1 pk, now64(3) timestamp, 1 value;
INSERT INTO summ.landing SELECT 1 pk, now64(3) timestamp, 2 value;
INSERT INTO summ.landing SELECT 1 pk, now64(3) timestamp, 3 value;
INSERT INTO summ.landing SELECT 1 pk, now64(3) timestamp, 4 value;
INSERT INTO summ.landing SELECT 1 pk, now64(3) timestamp, 5 value;
SELECT * FROM summ.landing FINAL FORMAT PrettyCompact;
```
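As background, SummingMergeTree collapses rows that share the same sorting key by summing the summable columns and keeping an arbitrary value for the rest; that merge rule can be sketched roughly as (illustrative Python, not ClickHouse code):

```python
def merge_group(rows, summable_cols):
    """Collapse rows sharing the same sorting key: sum the summable
    columns and keep an arbitrary (here: the first) value for the rest."""
    merged = dict(rows[0])
    for col in summable_cols:
        merged[col] = sum(r[col] for r in rows)
    return merged

rows = [{"pk": 1, "timestamp": "2023-05-17 09:21:08", "value": v}
        for v in (1, 2, 3, 4, 5)]
print(merge_group(rows, ["value"]))
# {'pk': 1, 'timestamp': '2023-05-17 09:21:08', 'value': 15}
```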
**Expected behavior**
I'd expect the DateTime64 column to behave the same way DateTime does: the column is not summed, and an arbitrary value is selected.
See:
```sql
DROP DATABASE IF EXISTS summ;
CREATE DATABASE summ;
CREATE TABLE summ.landing ( `pk` UInt64, `timestamp` DateTime, `value` UInt64)
ENGINE = SummingMergeTree() ORDER BY pk;
INSERT INTO summ.landing SELECT 1 pk, now() timestamp, 1 value;
INSERT INTO summ.landing SELECT 1 pk, now() timestamp, 2 value;
INSERT INTO summ.landing SELECT 1 pk, now() timestamp, 3 value;
INSERT INTO summ.landing SELECT 1 pk, now() timestamp, 4 value;
INSERT INTO summ.landing SELECT 1 pk, now() timestamp, 5 value;
SELECT * FROM summ.landing FINAL FORMAT PrettyCompact;
```
=>
```
┌─pk─┬───────────timestamp─┬─value─┐
│  1 │ 2023-05-17 09:21:08 │    15 │
└────┴─────────────────────┴───────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/49938 | https://github.com/ClickHouse/ClickHouse/pull/50797 | f39c28bae7cfd5c5d3978644829d0f0203ed265d | 0349315143b57db22032ab049ddbebcf269a9020 | "2023-05-17T07:22:17Z" | c++ | "2023-06-20T20:02:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,927 | ["programs/server/config.xml", "src/IO/Resource/DynamicResourceManager.cpp", "tests/config/config.d/custom_settings_prefixes.xml", "tests/queries/0_stateless/02737_sql_auto_is_null.reference", "tests/queries/0_stateless/02737_sql_auto_is_null.sql"] | MySQL compatibility: SQL_AUTO_IS_NULL setting | **Use case**
BI tools like Tableau send the following command to the server during the initial connection:
```
SET SQL_AUTO_IS_NULL = 0
```
**Describe the solution you'd like**
According to the [docs](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_auto_is_null), the default value in MySQL is already 0. One possible workaround here is to simply no-op, since we don't want to transform queries containing `IS NULL` statements.
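Such a no-op could look roughly like this (a hypothetical Python sketch, not ClickHouse's actual implementation; the variable list and function names are illustrative):

```python
# Hypothetical sketch: a MySQL-compat layer that accepts but ignores session
# variables that have no ClickHouse counterpart.
IGNORED_MYSQL_VARIABLES = {"sql_auto_is_null"}   # illustrative list

def handle_set_statement(name, value, apply_setting):
    if name.lower() in IGNORED_MYSQL_VARIABLES:
        return "OK"                      # no-op: accept for compatibility, change nothing
    return apply_setting(name, value)

applied = {}
print(handle_set_statement("SQL_AUTO_IS_NULL", 0, applied.__setitem__))  # OK
print(applied)                                                           # {}
```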
**Additional context**
Full stack trace:
```
2023.05.16 15:55:52.164882 [ 358 ] {} <Trace> MySQLHandler: Sent handshake
2023.05.16 15:55:52.207020 [ 358 ] {} <Trace> MySQLHandler: payload size: 91
2023.05.16 15:55:52.207102 [ 358 ] {} <Trace> MySQLHandler: Capabilities: 431923853, max_packet_size: 1073741824, character_set: 255, user: default, auth_response length: 20, database: default, auth_plugin_name: mysql_native_password
2023.05.16 15:55:52.207145 [ 358 ] {} <Debug> MYSQL-Session: f71fb593-329a-4fcb-96ab-f96fcfb77e30 Authenticating user 'default' from 34.246.62.141:36697
2023.05.16 15:55:52.207237 [ 358 ] {} <Debug> MYSQL-Session: f71fb593-329a-4fcb-96ab-f96fcfb77e30 Authenticated with global context as user 94309d50-4f52-5250-31bd-74fecac179db
2023.05.16 15:55:52.207345 [ 358 ] {} <Debug> MySQLHandler: Authentication for user default succeeded.
2023.05.16 15:55:52.207363 [ 358 ] {} <Debug> MYSQL-Session: f71fb593-329a-4fcb-96ab-f96fcfb77e30 Creating session context with user_id: 94309d50-4f52-5250-31bd-74fecac179db
2023.05.16 15:55:52.207488 [ 358 ] {} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=false
2023.05.16 15:55:52.207530 [ 358 ] {} <Trace> ContextAccess (default): List of all grants: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, UNDROP TABLE, TRUNCATE, OPTIMIZE, BACKUP, KILL QUERY, KILL TRANSACTION, MOVE PARTITION BETWEEN SHARDS, ACCESS MANAGEMENT, SYSTEM, dictGet, INTROSPECTION, SOURCES, CLUSTER ON *.* WITH GRANT OPTION
2023.05.16 15:55:52.207547 [ 358 ] {} <Trace> ContextAccess (default): List of all grants including implicit: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, UNDROP TABLE, TRUNCATE, OPTIMIZE, BACKUP, KILL QUERY, KILL TRANSACTION, MOVE PARTITION BETWEEN SHARDS, ACCESS MANAGEMENT, SYSTEM, dictGet, INTROSPECTION, SOURCES, CLUSTER ON *.* WITH GRANT OPTION
2023.05.16 15:55:52.207602 [ 358 ] {} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=false
2023.05.16 15:55:52.207629 [ 358 ] {} <Trace> ContextAccess (default): List of all grants: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, UNDROP TABLE, TRUNCATE, OPTIMIZE, BACKUP, KILL QUERY, KILL TRANSACTION, MOVE PARTITION BETWEEN SHARDS, ACCESS MANAGEMENT, SYSTEM, dictGet, INTROSPECTION, SOURCES, CLUSTER ON *.* WITH GRANT OPTION
2023.05.16 15:55:52.207646 [ 358 ] {} <Trace> ContextAccess (default): List of all grants including implicit: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, UNDROP TABLE, TRUNCATE, OPTIMIZE, BACKUP, KILL QUERY, KILL TRANSACTION, MOVE PARTITION BETWEEN SHARDS, ACCESS MANAGEMENT, SYSTEM, dictGet, INTROSPECTION, SOURCES, CLUSTER ON *.* WITH GRANT OPTION
2023.05.16 15:55:52.249759 [ 358 ] {} <Debug> MySQLHandler: Received command: 3. Connection id: 340.
2023.05.16 15:55:52.291866 [ 358 ] {} <Debug> MySQLHandler: Received command: 3. Connection id: 340.
2023.05.16 15:55:52.333917 [ 358 ] {} <Debug> MySQLHandler: Received command: 3. Connection id: 340.
2023.05.16 15:55:52.334031 [ 358 ] {} <Debug> MYSQL-Session: f71fb593-329a-4fcb-96ab-f96fcfb77e30 Creating query context from session context, user_id: 94309d50-4f52-5250-31bd-74fecac179db, parent context user: default
2023.05.16 15:55:52.334253 [ 358 ] {mysql:340:cbc24c7d-19e7-4431-9a15-c4986b55bb39} <Debug> executeQuery: (from 34.246.62.141:36697) SET SQL_AUTO_IS_NULL = 0 (stage: Complete)
2023.05.16 15:55:52.334940 [ 358 ] {mysql:340:cbc24c7d-19e7-4431-9a15-c4986b55bb39} <Error> executeQuery: Code: 115. DB::Exception: Unknown setting SQL_AUTO_IS_NULL. (UNKNOWN_SETTING) (version 23.4.1.1157 (official build)) (from 34.246.62.141:36697) (in query: SET SQL_AUTO_IS_NULL = 0), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe31fa95 in /usr/bin/clickhouse
1. ? @ 0x8acc5a4 in /usr/bin/clickhouse
2. DB::BaseSettingsHelpers::throwSettingNotFound(std::basic_string_view<char, std::char_traits<char>>) @ 0x12973908 in /usr/bin/clickhouse
3. ? @ 0x1270de80 in /usr/bin/clickhouse
4. DB::SettingsConstraints::checkImpl(DB::Settings const&, DB::SettingChange&, DB::SettingsConstraints::ReactionOnViolation) const @ 0x12837c22 in /usr/bin/clickhouse
5. DB::InterpreterSetQuery::execute() @ 0x13aa8350 in /usr/bin/clickhouse
6. ? @ 0x13dc01e7 in /usr/bin/clickhouse
7. ? @ 0x13dc612c in /usr/bin/clickhouse
8. DB::MySQLHandler::comQuery(DB::ReadBuffer&) @ 0x14b68d45 in /usr/bin/clickhouse
9. DB::MySQLHandler::run() @ 0x14b65754 in /usr/bin/clickhouse
10. Poco::Net::TCPServerConnection::start() @ 0x17aeb0d4 in /usr/bin/clickhouse
11. Poco::Net::TCPServerDispatcher::run() @ 0x17aec2fb in /usr/bin/clickhouse
12. Poco::PooledThread::run() @ 0x17c6a7a7 in /usr/bin/clickhouse
13. Poco::ThreadImpl::runnableEntry(void*) @ 0x17c681dd in /usr/bin/clickhouse
14. ? @ 0x7f3f3b701609 in ?
15. __clone @ 0x7f3f3b626133 in ?
2023.05.16 15:55:52.335105 [ 358 ] {mysql:340:cbc24c7d-19e7-4431-9a15-c4986b55bb39} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2023.05.16 15:55:52.335238 [ 358 ] {} <Error> MySQLHandler: MySQLHandler: Cannot read packet: : Code: 115. DB::Exception: Unknown setting SQL_AUTO_IS_NULL. (UNKNOWN_SETTING), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe31fa95 in /usr/bin/clickhouse
1. ? @ 0x8acc5a4 in /usr/bin/clickhouse
2. DB::BaseSettingsHelpers::throwSettingNotFound(std::basic_string_view<char, std::char_traits<char>>) @ 0x12973908 in /usr/bin/clickhouse
3. ? @ 0x1270de80 in /usr/bin/clickhouse
4. DB::SettingsConstraints::checkImpl(DB::Settings const&, DB::SettingChange&, DB::SettingsConstraints::ReactionOnViolation) const @ 0x12837c22 in /usr/bin/clickhouse
5. DB::InterpreterSetQuery::execute() @ 0x13aa8350 in /usr/bin/clickhouse
6. ? @ 0x13dc01e7 in /usr/bin/clickhouse
7. ? @ 0x13dc612c in /usr/bin/clickhouse
8. DB::MySQLHandler::comQuery(DB::ReadBuffer&) @ 0x14b68d45 in /usr/bin/clickhouse
9. DB::MySQLHandler::run() @ 0x14b65754 in /usr/bin/clickhouse
10. Poco::Net::TCPServerConnection::start() @ 0x17aeb0d4 in /usr/bin/clickhouse
11. Poco::Net::TCPServerDispatcher::run() @ 0x17aec2fb in /usr/bin/clickhouse
12. Poco::PooledThread::run() @ 0x17c6a7a7 in /usr/bin/clickhouse
13. Poco::ThreadImpl::runnableEntry(void*) @ 0x17c681dd in /usr/bin/clickhouse
14. ? @ 0x7f3f3b701609 in ?
15. __clone @ 0x7f3f3b626133 in ?
(version 23.4.1.1157 (official build))
2023.05.16 15:55:52.377362 [ 358 ] {} <Debug> MySQLHandler: Received command: 1. Connection id: 340.
2023.05.16 15:55:53.000387 [ 287 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 1.58 GiB, peak 11.37 GiB, free memory in arenas 64.58 MiB, will set to 1.56 GiB (RSS), difference: -23.69 MiB
2023.05.16 15:55:53.086353 [ 358 ] {} <Trace> MySQLHandlerFactory: MySQL connection. Id: 341. Address: 34.246.62.141:64644
2023.05.16 15:55:53.086661 [ 358 ] {} <Trace> MySQLHandler: Sent handshake
2023.05.16 15:55:53.128771 [ 358 ] {} <Trace> MySQLHandler: payload size: 91
2023.05.16 15:55:53.128859 [ 358 ] {} <Trace> MySQLHandler: Capabilities: 431923853, max_packet_size: 1073741824, character_set: 255, user: default, auth_response length: 20, database: default, auth_plugin_name: mysql_native_password
2023.05.16 15:55:53.128938 [ 358 ] {} <Debug> MYSQL-Session: b7fd9b69-bd60-4afb-99c5-e39368a1ab13 Authenticating user 'default' from 34.246.62.141:64644
2023.05.16 15:55:53.129007 [ 358 ] {} <Debug> MYSQL-Session: b7fd9b69-bd60-4afb-99c5-e39368a1ab13 Authenticated with global context as user 94309d50-4f52-5250-31bd-74fecac179db
2023.05.16 15:55:53.129026 [ 358 ] {} <Debug> MySQLHandler: Authentication for user default succeeded.
2023.05.16 15:55:53.129038 [ 358 ] {} <Debug> MYSQL-Session: b7fd9b69-bd60-4afb-99c5-e39368a1ab13 Creating session context with user_id: 94309d50-4f52-5250-31bd-74fecac179db
2023.05.16 15:55:53.171211 [ 358 ] {} <Debug> MySQLHandler: Received command: 3. Connection id: 341.
2023.05.16 15:55:53.213336 [ 358 ] {} <Debug> MySQLHandler: Received command: 3. Connection id: 341.
2023.05.16 15:55:53.255402 [ 358 ] {} <Debug> MySQLHandler: Received command: 3. Connection id: 341.
2023.05.16 15:55:53.255510 [ 358 ] {} <Debug> MYSQL-Session: b7fd9b69-bd60-4afb-99c5-e39368a1ab13 Creating query context from session context, user_id: 94309d50-4f52-5250-31bd-74fecac179db, parent context user: default
2023.05.16 15:55:53.255728 [ 358 ] {mysql:341:a59fc5bc-fd1d-41be-9c1c-224b6f870fcd} <Debug> executeQuery: (from 34.246.62.141:64644) SET SQL_AUTO_IS_NULL = 0 (stage: Complete)
2023.05.16 15:55:53.256409 [ 358 ] {mysql:341:a59fc5bc-fd1d-41be-9c1c-224b6f870fcd} <Error> executeQuery: Code: 115. DB::Exception: Unknown setting SQL_AUTO_IS_NULL. (UNKNOWN_SETTING) (version 23.4.1.1157 (official build)) (from 34.246.62.141:64644) (in query: SET SQL_AUTO_IS_NULL = 0), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe31fa95 in /usr/bin/clickhouse
1. ? @ 0x8acc5a4 in /usr/bin/clickhouse
2. DB::BaseSettingsHelpers::throwSettingNotFound(std::basic_string_view<char, std::char_traits<char>>) @ 0x12973908 in /usr/bin/clickhouse
3. ? @ 0x1270de80 in /usr/bin/clickhouse
4. DB::SettingsConstraints::checkImpl(DB::Settings const&, DB::SettingChange&, DB::SettingsConstraints::ReactionOnViolation) const @ 0x12837c22 in /usr/bin/clickhouse
5. DB::InterpreterSetQuery::execute() @ 0x13aa8350 in /usr/bin/clickhouse
6. ? @ 0x13dc01e7 in /usr/bin/clickhouse
7. ? @ 0x13dc612c in /usr/bin/clickhouse
8. DB::MySQLHandler::comQuery(DB::ReadBuffer&) @ 0x14b68d45 in /usr/bin/clickhouse
9. DB::MySQLHandler::run() @ 0x14b65754 in /usr/bin/clickhouse
10. Poco::Net::TCPServerConnection::start() @ 0x17aeb0d4 in /usr/bin/clickhouse
11. Poco::Net::TCPServerDispatcher::run() @ 0x17aec2fb in /usr/bin/clickhouse
12. Poco::PooledThread::run() @ 0x17c6a7a7 in /usr/bin/clickhouse
13. Poco::ThreadImpl::runnableEntry(void*) @ 0x17c681dd in /usr/bin/clickhouse
14. ? @ 0x7f3f3b701609 in ?
15. __clone @ 0x7f3f3b626133 in ?
2023.05.16 15:55:53.256568 [ 358 ] {mysql:341:a59fc5bc-fd1d-41be-9c1c-224b6f870fcd} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2023.05.16 15:55:53.256634 [ 358 ] {} <Error> MySQLHandler: MySQLHandler: Cannot read packet: : Code: 115. DB::Exception: Unknown setting SQL_AUTO_IS_NULL. (UNKNOWN_SETTING), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe31fa95 in /usr/bin/clickhouse
1. ? @ 0x8acc5a4 in /usr/bin/clickhouse
2. DB::BaseSettingsHelpers::throwSettingNotFound(std::basic_string_view<char, std::char_traits<char>>) @ 0x12973908 in /usr/bin/clickhouse
3. ? @ 0x1270de80 in /usr/bin/clickhouse
4. DB::SettingsConstraints::checkImpl(DB::Settings const&, DB::SettingChange&, DB::SettingsConstraints::ReactionOnViolation) const @ 0x12837c22 in /usr/bin/clickhouse
5. DB::InterpreterSetQuery::execute() @ 0x13aa8350 in /usr/bin/clickhouse
6. ? @ 0x13dc01e7 in /usr/bin/clickhouse
7. ? @ 0x13dc612c in /usr/bin/clickhouse
8. DB::MySQLHandler::comQuery(DB::ReadBuffer&) @ 0x14b68d45 in /usr/bin/clickhouse
9. DB::MySQLHandler::run() @ 0x14b65754 in /usr/bin/clickhouse
10. Poco::Net::TCPServerConnection::start() @ 0x17aeb0d4 in /usr/bin/clickhouse
11. Poco::Net::TCPServerDispatcher::run() @ 0x17aec2fb in /usr/bin/clickhouse
12. Poco::PooledThread::run() @ 0x17c6a7a7 in /usr/bin/clickhouse
13. Poco::ThreadImpl::runnableEntry(void*) @ 0x17c681dd in /usr/bin/clickhouse
14. ? @ 0x7f3f3b701609 in ?
15. __clone @ 0x7f3f3b626133 in ?
(version 23.4.1.1157 (official build))
```
| https://github.com/ClickHouse/ClickHouse/issues/49927 | https://github.com/ClickHouse/ClickHouse/pull/50013 | 2323542e47a02ab476b4dc1d4bb1830c53695f07 | 2c8c412835f1799e410c585ac370ce0ebabf7c7a | "2023-05-16T16:21:47Z" | c++ | "2023-05-19T23:04:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,924 | ["src/Formats/ProtobufSerializer.cpp", "tests/queries/0_stateless/02751_protobuf_ipv6.reference", "tests/queries/0_stateless/02751_protobuf_ipv6.sh", "tests/queries/0_stateless/format_schemas/02751_protobuf_ipv6.proto"] | Unable to import IPv6 addresses using Protobuf | **Describe the issue**
Since version 23, it is not possible to directly import IPv6 addresses from Protobuf (as bytes). This was working fine with 22.3.
**How to reproduce**
In `/var/lib/clickhouse/format_schemas/flows.proto`:
```proto
syntax = "proto3";
message FlowMessage {
bytes ExporterAddress = 3;
}
```
Create a table:
```sql
CREATE TABLE default.flows
(
`ExporterAddress` IPv6
)
ENGINE = MergeTree
ORDER BY ExporterAddress
SETTINGS index_granularity = 8192
```
Insert using Protobuf format:
```console
$ echo <intentionally removed - could contain sensitive user information> | xxd -r -p | clickhouse clickhouse-client -q "INSERT INTO flows SETTINGS format_schema='/var/lib/clickhouse/format_schemas/flows.proto:FlowMessage' FORMAT Protobuf"
Code: 676. DB::ParsingException: Cannot parse IPv6 : While executing ProtobufRowInputFormat: data for INSERT was parsed from stdin: (in query: INSERT INTO flows SETTINGS format_schema='/var/lib/clickhouse/format_schemas/flows.proto:FlowMessage' FORMAT Protobuf): (in file/uri (fd = 0)): (at row 1)
. (CANNOT_PARSE_IPV6)
```
When using 22.3, there is no error:
```
369034ef06d4 :) select * from flows
SELECT *
FROM flows
Query id: b31771a4-b1a6-442e-a9eb-84bc7b944036
┌─ExporterAddress──────┐
│ ::ffff:192.168.0.254 │
└──────────────────────┘
1 row in set. Elapsed: 0.002 sec.
```
This is still reproducible with master. | https://github.com/ClickHouse/ClickHouse/issues/49924 | https://github.com/ClickHouse/ClickHouse/pull/49933 | f98c337d2f9580fa65c8d21de447ab6e8fe3d781 | a2c3de50820996549e975a0aceeb398ccc2897b5 | "2023-05-16T15:25:27Z" | c++ | "2023-05-19T03:02:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,913 | ["src/Storages/MergeTree/DataPartsExchange.cpp", "src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeData.h", "tests/integration/test_projection_report_broken_part/__init__.py", "tests/integration/test_projection_report_broken_part/configs/testkeeper.xml", "tests/integration/test_projection_report_broken_part/test.py"] | Unexpected part name: <projection name> | **INPUTS**
- `ReplicatedMergeTree` table projections (works fine)
- CH ver `23.1.6.42`
- `system.mutations` entries for `MATERIALIZE PROJECTION` have `is_done` equal to `0` for non-existent parts
**Unexpected behaviour**
Hanging `MATERIALIZE PROJECTION` mutations in `system.mutations` that reference non-existent parts.
**Expected behavior**
0 system.mutations.
**Additional info**
1. create particular table projections
2. MATERIALIZE them
3. wait a bit and check system.mutations
```sql
select mutation_id, command, is_done, parts_to_do, parts_to_do_names, latest_fail_reason from system.mutations where is_done=0
┌─mutation_id─┬─command───────────────────────────────────────────┬─is_done─┬─parts_to_do─┬─parts_to_do_names─────────────────────────────────────────────────────────────────────┬─latest_fail_reason─┐
│ 0000000005  │ MATERIALIZE PROJECTION device_agregation          │       0 │           2 │ ['b0d8617590929340bbb5fb58cf52bce4_60_65_2','b9186c325b4cfa4ca55dcba0e78fd4a3_0_5_1'] │                    │
│ 0000000006  │ MATERIALIZE PROJECTION users_agregation           │       0 │           2 │ ['b0d8617590929340bbb5fb58cf52bce4_60_65_2','b9186c325b4cfa4ca55dcba0e78fd4a3_0_5_1'] │                    │
│ 0000000007  │ MATERIALIZE PROJECTION messages_agregation        │       0 │           2 │ ['b0d8617590929340bbb5fb58cf52bce4_60_65_2','b9186c325b4cfa4ca55dcba0e78fd4a3_0_5_1'] │                    │
│ 0000000008  │ MATERIALIZE PROJECTION overlay_session_agregation │       0 │           2 │ ['b0d8617590929340bbb5fb58cf52bce4_60_65_2','b9186c325b4cfa4ca55dcba0e78fd4a3_0_5_1'] │                    │
└─────────────┴───────────────────────────────────────────────────┴─────────┴─────────────┴───────────────────────────────────────────────────────────────────────────────────────┴────────────────────┘
```
4. check CH tasks/processes
```sql
SELECT * FROM system.processes WHERE query not like 'SELECT%' LIMIT 10 FORMAT Vertical
Ok.
0 rows in set. Elapsed: 0.003 sec.
```
5. get part info for stucked parts
```sql
select partition, name, marks, rows, modification_time from system.parts where name IN('b0d8617590929340bbb5fb58cf52bce4_60_65_2','b9186c325b4cfa4ca55dcba0e78fd4a3_0_5_1')
Ok.
0 rows in set. Elapsed: 0.010 sec. Processed 3.64 thousand rows, 324.88 KB (357.75 thousand rows/s., 31.97 MB/s.)
```
6. check whether there are any parts in these partitions
```sql
select partition, name, marks, rows, modification_time from system.parts where partition_id IN('b0d8617590929340bbb5fb58cf52bce4', 'b9186c325b4cfa4ca55dcba0e78fd4a3')
┌─partition──┬─name─────────────────────────────────────────────┬─marks─┬───rows─┬───modification_time─┐
│ 20230410PM │ b0d8617590929340bbb5fb58cf52bce4_0_11_2_75       │    75 │ 592459 │ 2023-05-16 09:50:37 │
│ 20230410PM │ b0d8617590929340bbb5fb58cf52bce4_12_17_1_75      │    52 │ 409492 │ 2023-05-16 09:50:37 │
│ 20230410PM │ b0d8617590929340bbb5fb58cf52bce4_18_59_7_75      │    70 │ 561446 │ 2023-05-16 09:50:37 │
│ 20230410PM │ b0d8617590929340bbb5fb58cf52bce4_60_64_1         │    32 │ 250081 │ 2023-05-11 14:17:26 │
│ 20230410PM │ b0d8617590929340bbb5fb58cf52bce4_65_65_0         │     2 │    377 │ 2023-05-11 14:17:25 │
│ 20230410PM │ b0d8617590929340bbb5fb58cf52bce4_66_66_0_75      │     2 │    159 │ 2023-05-16 09:50:37 │
│ 20230512AM │ b9186c325b4cfa4ca55dcba0e78fd4a3_0_0_0           │     2 │   7950 │ 2023-05-12 06:59:15 │
│ 20230512AM │ b9186c325b4cfa4ca55dcba0e78fd4a3_1_1_0           │     2 │      2 │ 2023-05-12 07:19:39 │
│ 20230512AM │ b9186c325b4cfa4ca55dcba0e78fd4a3_2_2_0           │     2 │      3 │ 2023-05-12 07:19:50 │
│ 20230512AM │ b9186c325b4cfa4ca55dcba0e78fd4a3_3_3_0           │     2 │      1 │ 2023-05-12 07:19:50 │
│ 20230512AM │ b9186c325b4cfa4ca55dcba0e78fd4a3_4_4_0           │     2 │      3 │ 2023-05-12 07:20:00 │
│ 20230512AM │ b9186c325b4cfa4ca55dcba0e78fd4a3_5_5_0           │     2 │      2 │ 2023-05-12 07:20:04 │
│ 20230512AM │ b9186c325b4cfa4ca55dcba0e78fd4a3_6_3855_763_3860 │     2 │   7493 │ 2023-05-16 09:50:40 │
└────────────┴──────────────────────────────────────────────────┴───────┴────────┴─────────────────────┘
```
7. materialize the device_agregation projection once again and grab some parts to check whether they exist or not
```sql
ALTER TABLE analytics_local MATERIALIZE projection device_agregation
select parts_to_do, parts_to_do_names from system.mutations where is_done=0
-- 1864, '1e8ea56ef92907db5039c205765fda33_17_17_0_26','1ebc42839fc25ba80c59f25c930876bd_0_0_0_9','1eeded7867995ae3bf35178ebae6a49a_0_0_0_9',...
select partition, name, marks, rows, modification_time from system.parts where name IN('1e8ea56ef92907db5039c205765fda33_17_17_0_26','1ebc42839fc25ba80c59f25c930876bd_0_0_0_9')
┌─partition──┬─name────────────────────────────────────────┬─marks─┬─rows─┬───modification_time─┐
│ 20220307AM │ 1e8ea56ef92907db5039c205765fda33_17_17_0_26 │     2 │   10 │ 2023-05-16 09:49:30 │
│ 20210714AM │ 1ebc42839fc25ba80c59f25c930876bd_0_0_0_9    │     2 │ 1830 │ 2023-05-16 09:49:30 │
└────────────┴─────────────────────────────────────────────┴───────┴──────┴─────────────────────┘
```
8. wait a bit and check `system.mutations`
```sql
select mutation_id, command, is_done, parts_to_do, parts_to_do_names, latest_fail_reason from system.mutations where is_done=0
┌─mutation_id─┬─command───────────────────────────────────────────┬─is_done─┬─parts_to_do─┬─parts_to_do_names─────────────────────────────────────────────────────────────────────┬─latest_fail_reason─┐
│ 0000000005  │ MATERIALIZE PROJECTION device_agregation          │       0 │           2 │ ['b0d8617590929340bbb5fb58cf52bce4_60_65_2','b9186c325b4cfa4ca55dcba0e78fd4a3_0_5_1'] │                    │
│ 0000000006  │ MATERIALIZE PROJECTION users_agregation           │       0 │           2 │ ['b0d8617590929340bbb5fb58cf52bce4_60_65_2','b9186c325b4cfa4ca55dcba0e78fd4a3_0_5_1'] │                    │
│ 0000000007  │ MATERIALIZE PROJECTION messages_agregation        │       0 │           2 │ ['b0d8617590929340bbb5fb58cf52bce4_60_65_2','b9186c325b4cfa4ca55dcba0e78fd4a3_0_5_1'] │                    │
│ 0000000008  │ MATERIALIZE PROJECTION overlay_session_agregation │       0 │           2 │ ['b0d8617590929340bbb5fb58cf52bce4_60_65_2','b9186c325b4cfa4ca55dcba0e78fd4a3_0_5_1'] │                    │
│ 0000000009  │ MATERIALIZE PROJECTION device_agregation          │       0 │           2 │ ['b0d8617590929340bbb5fb58cf52bce4_60_65_2','b9186c325b4cfa4ca55dcba0e78fd4a3_0_5_1'] │                    │
└─────────────┴───────────────────────────────────────────────────┴─────────┴─────────────┴───────────────────────────────────────────────────────────────────────────────────────┴────────────────────┘
```
```sql
SELECT version()
┌─version()─┐
│ 23.1.6.42 │
└───────────┘
```
**Workaround**
As there are no processes/tasks related to these mutations, we can simply kill them:
```sql
KILL MUTATION ON CLUSTER '{cluster}' WHERE mutation_id IN(select mutation_id from system.mutations where is_done=0)
``` | https://github.com/ClickHouse/ClickHouse/issues/49913 | https://github.com/ClickHouse/ClickHouse/pull/50052 | 8dbf7beb32fa752bc4b87decc741221ae2e9249c | c89f92e1f67271f295cb4a44de12b2916ff393cd | "2023-05-16T11:34:43Z" | c++ | "2023-05-22T11:20:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,885 | ["src/AggregateFunctions/AggregateFunctionIf.h", "tests/performance/uniqExactIf.xml"] | Parallelise merging of `UniqExactIf` states | Original question from user: https://github.com/ClickHouse/ClickHouse/discussions/49865#discussioncomment-5904525
We need to repeat what was done for `uniqExact`, i.e. provide [methods](https://github.com/nickitat/ClickHouse/blob/master/src/AggregateFunctions/AggregateFunctionUniq.h#L416-L424) for `AggregateFunctionIf` that is used for parallel merging, that could just call corresponding methods on the nested function. | https://github.com/ClickHouse/ClickHouse/issues/49885 | https://github.com/ClickHouse/ClickHouse/pull/50285 | 12ca383132f183ac6dda002a15d86c3a601d4dd4 | 3cc9feafc2a99359512fd31752a9379e09f8cbe1 | "2023-05-15T14:53:27Z" | c++ | "2023-05-29T16:42:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,878 | ["docs/en/development/build.md", "docs/en/getting-started/install.md", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | [ARM] Request to Specify ClickHouse Version during Installation | RPM packages only support modern ARM cores, such as Graviton 2, 3.
For older ARM cores, install ClickHouse as follows:
```
curl https://clickhouse.com/ | sh
```
_Originally posted by @alexey-milovidov in https://github.com/ClickHouse/ClickHouse/issues/49814#issuecomment-1546781571_
Dear alexey,
I hope this message finds you well. I recently installed ClickHouse 23.5.1 following the instructions provided, and I'm grateful for the guidance and support provided by the project.
I wanted to inquire about the possibility of specifying a particular version of ClickHouse during the installation process. Specifically, I am interested in installing version 23.2.4.12. I have reviewed the available documentation and configuration options, but I couldn't find a clear way to specify the desired version.
Could you please guide me on how to install a specific version of ClickHouse? It would be immensely helpful if you could provide instructions or point me to the relevant documentation or resources.
Thank you for your attention to this matter, and I appreciate the effort you have put into maintaining and improving ClickHouse. Your contributions have been invaluable to the community.
Best regards,
zenghuasheng
| https://github.com/ClickHouse/ClickHouse/issues/49878 | https://github.com/ClickHouse/ClickHouse/pull/50359 | ff5884989f238d02ea0392ea061d40938bde40a9 | 3bc4d11b46b2be73e7a5752714382fdcf359da88 | "2023-05-15T10:20:26Z" | c++ | "2023-05-31T00:32:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,781 | ["src/Functions/array/FunctionArrayMapped.h", "tests/queries/0_stateless/02735_array_map_array_of_tuples.reference", "tests/queries/0_stateless/02735_array_map_array_of_tuples.sql"] | arrayMap unwraps tuples with a single element |
**Describe the unexpected behaviour**
```
SELECT arrayMap(e -> e, [tuple(NULL)])
Query id: 0acdde56-2f1e-43ea-8de5-b69a84465f15
┌─arrayMap(lambda(tuple(e), e), [tuple(NULL)])─┐
│ [NULL]                                       │
└──────────────────────────────────────────────┘
```
**How to reproduce**
* ClickHouse version 23.4.2.1; version 23.3 still returned a tuple
**Expected behavior**
`arrayMap` with an identity lambda should return an array of tuples, not an array of the tuple's inner value.
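For illustration, a minimal check of the expected behaviour — the exact result types shown in the comments are an assumption based on the 23.3 behaviour described above:

```sql
-- An identity lambda should preserve the tuple element type:
SELECT arrayMap(e -> e, [tuple(NULL)]) AS res, toTypeName(res);
-- 23.3 returned [(NULL)] (an array of one-element tuples);
-- 23.4 incorrectly returns [NULL], with the tuple unwrapped.
```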
| https://github.com/ClickHouse/ClickHouse/issues/49781 | https://github.com/ClickHouse/ClickHouse/pull/49789 | a3e26dd22f746db3a2fa9ec308365b1dde99f1fe | 3351ef739812f918eda28c2a0ca75d725d0e33e4 | "2023-05-11T11:45:57Z" | c++ | "2023-05-12T11:27:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,769 | ["programs/client/Client.cpp", "programs/local/LocalServer.cpp", "tests/queries/0_stateless/01523_client_local_queries_file_parameter.reference", "tests/queries/0_stateless/01523_client_local_queries_file_parameter.sh", "tests/queries/0_stateless/02751_multiquery_with_argument.reference", "tests/queries/0_stateless/02751_multiquery_with_argument.sh"] | clickhouse-client: Passing `--query` and `--queries-file` works in non-oblivious way | Combining `clickhouse-client` executes queries from file but does not exectute query in --query
**How to reproduce**
1. Create file sql.sql with string `SELECT 'File`
2. `clickhouse-local --query "select 1" --queries-file "sql.sql"`
**Actual behavior**
The client prints `File`; `select 1` is ignored
**Expected behavior**
An exception that usage of both options `--query (-q)` and `--queries-file` is prohibited | https://github.com/ClickHouse/ClickHouse/issues/49769 | https://github.com/ClickHouse/ClickHouse/pull/50210 | a97b180ff36d50c6a2231b04f7cb10d7b274ac0f | 73fb2081c1699e0312c45a6c6b14c8c77297ecf0 | "2023-05-11T08:11:57Z" | c++ | "2023-06-01T12:16:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,708 | ["src/Storages/StorageFile.cpp", "src/TableFunctions/TableFunctionFile.cpp", "tests/queries/0_stateless/02946_parallel_replicas_distributed.reference", "tests/queries/0_stateless/02946_parallel_replicas_distributed.sql"] | allow_experimental_parallel_reading_from_replicas multplies results | ```sql
CREATE TABLE test (id UInt64, date Date)
ENGINE = MergeTree
ORDER BY id
as select *, today() from numbers(100);
CREATE TABLE IF NOT EXISTS test_d as test
ENGINE = Distributed(test_cluster_one_shard_three_replicas_localhost, currentDatabase(),test);
SELECT count(), sum(id)
FROM test_d
SETTINGS allow_experimental_parallel_reading_from_replicas = 1, max_parallel_replicas = 3, prefer_localhost_replica = 0;
┌─count()─┬─sum(id)─┐
│     300 │   14850 │
└─────────┴─────────┘
SELECT count(), sum(id)
FROM test_d
┌─count()─┬─sum(id)─┐
│     100 │    4950 │
└─────────┴─────────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/49708 | https://github.com/ClickHouse/ClickHouse/pull/57979 | cc8f6ab8090888c8dc92655a70e2f2d6a5502ce7 | 1a7de9158fd3b3f1a3f477af6a6533f46c3dc045 | "2023-05-09T17:54:36Z" | c++ | "2023-12-18T20:29:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,694 | ["src/Common/tests/gtest_thread_pool_schedule_exception.cpp"] | Failed unit test `ThreadPool.ExceptionFromWait` | Link https://s3.amazonaws.com/clickhouse-test-reports/49676/7250e7fec863286b010484dd0a13a0812168ff60/unit_tests__ubsan_.html | https://github.com/ClickHouse/ClickHouse/issues/49694 | https://github.com/ClickHouse/ClickHouse/pull/49755 | 94bfb1171ded4033f813e4d13c2d36d12a351bbf | d147cb105c9d805bd98bd1d7090b3b57297a93a8 | "2023-05-09T11:05:26Z" | c++ | "2023-05-12T11:08:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,685 | ["src/Processors/QueryPlan/ReadFromMergeTree.cpp", "tests/queries/0_stateless/02814_ReplacingMergeTree_fix_select_final_on_single_partition.reference", "tests/queries/0_stateless/02814_ReplacingMergeTree_fix_select_final_on_single_partition.sql"] | Setting do_not_merge_across_partitions_select_final = 1 does not filter deleted rows in ReplacingMergeTree | We are using Clickhouse 23.3.2 version and we have a few `ReplacingMergeTree` tables defined like below:
```
CREATE TABLE subscriptions_replacing
(
`subscription_id` UInt64,
`account_id` UInt64,
...
`_is_deleted` UInt8,
`_version` UInt64
)
ENGINE = ReplicatedReplacingMergeTree('/clickhouse/{cluster}/tables/{database}/{table}', '{replica}', _version, _is_deleted)
PRIMARY KEY (account_id, subscription_id)
ORDER BY (account_id, subscription_id)
SETTINGS min_age_to_force_merge_seconds = 300, min_age_to_force_merge_on_partition_only = 1, index_granularity = 8192
```
When we are querying with `FINAL`, we get different results if we add the setting `do_not_merge_across_partitions_select_final = 1`. It seems like it skips filtering the deleted rows:
```
$ select count(), _is_deleted from subscriptions_replacing group by _is_deleted
┌─count()─┬─_is_deleted─┐
│ 1096219 │           0 │
│  264359 │           1 │
└─────────┴─────────────┘
βββββββββββ΄ββββββββββββββ
2 rows in set. Elapsed: 0.010 sec. Processed 1.36 million rows, 1.36 MB (139.59 million rows/s., 139.59 MB/s.)
$ select count() from subscriptions_replacing final
┌─count()─┐
│ 1096219 │
└─────────┘
1 row in set. Elapsed: 0.066 sec. Processed 1.37 million rows, 34.22 MB (20.85 million rows/s., 521.13 MB/s.)
$ select count() from subscriptions_replacing final SETTINGS do_not_merge_across_partitions_select_final = 1
┌─count()─┐
│ 1360578 │
└─────────┘
1 row in set. Elapsed: 0.019 sec. Processed 1.36 million rows, 34.01 MB (72.04 million rows/s., 1.80 GB/s.)
$ select count() from subscriptions_replacing final where _is_deleted=0 SETTINGS do_not_merge_across_partitions_select_final = 1
┌─count()─┐
│ 1096219 │
└─────────┘
1 row in set. Elapsed: 0.021 sec. Processed 1.36 million rows, 34.01 MB (65.00 million rows/s., 1.62 GB/s.)
```
Unfortunately I couldn't reproduce in a local setup.
Is it expected that this setting affects the latest behaviour of `ReplacingMergeTrees` which filters the deleted rows, or is this a bug? | https://github.com/ClickHouse/ClickHouse/issues/49685 | https://github.com/ClickHouse/ClickHouse/pull/53511 | cba1d6688253d102103cbc1e737cfee806a99abc | b7d8ebf5e238e3c91bcda6b74a671784ce8181af | "2023-05-09T06:10:41Z" | c++ | "2023-08-28T13:07:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,672 | ["src/Common/Volnitsky.h", "src/Common/format.h", "src/Functions/randomStringUTF8.cpp", "tests/queries/0_stateless/01278_random_string_utf8.reference", "tests/queries/0_stateless/01278_random_string_utf8.sql"] | AST Fuzzer (msan): use-of-uninitialized-value in StringSearcher | [A link to the report
](https://s3.amazonaws.com/clickhouse-test-reports/49636/fe02317d4595ddcea9837dc5a33f047242653642/fuzzer_astfuzzermsan/report.html)
Stacks:
https://pastila.nl/?030ef1b1/7e71a0d51ee89c61ce8ca97856c021e2
https://pastila.nl/?0269eed6/0c3c7e9b0b6c6db8cbc7dbeed06ce055
https://pastila.nl/?0269eed6/2d09fe39393369b364c2d304993d12cc
Look like something related to string searcher
CC @rschu1ze | https://github.com/ClickHouse/ClickHouse/issues/49672 | https://github.com/ClickHouse/ClickHouse/pull/49750 | 23fd9937a3f38e57559e82f4a090672d79446665 | 8997c6ef953cf96e6ed9e966d1ef034c19e1ec3d | "2023-05-08T16:40:39Z" | c++ | "2023-05-11T16:49:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,622 | ["src/Common/HashTable/ClearableHashSet.h", "tests/queries/0_stateless/02733_fix_distinct_in_order_bug_49622.reference", "tests/queries/0_stateless/02733_fix_distinct_in_order_bug_49622.sql"] | Unexpected "DISTINCT" result with empty string | + ClickHouse Version
`23.4.2.11`
+ DDL
```sql
CREATE TABLE test
(
c1 String,
c2 String,
c3 String
)
ENGINE = ReplacingMergeTree
ORDER BY (c1, c3);
INSERT INTO test(c1, c2, c3) VALUES
('', '', '1'), ('', '', '2'),('v1', 'v2', '3'),('v1', 'v2', '4'),('v1', 'v2', '5');
```
```sql
SELECT c1, c2, c3 FROM test GROUP BY c1, c2, c3;
┌─c1─┬─c2─┬─c3─┐
│    │    │ 2  │
│ v1 │ v2 │ 4  │
│ v1 │ v2 │ 3  │
│    │    │ 1  │
│ v1 │ v2 │ 5  │
└────┴────┴────┘
ββββββ΄βββββ΄βββββ
SELECT DISTINCT c1, c2, c3 FROM test;
┌─c1─┬─c2─┬─c3─┐
│    │    │ 1  │
│ v1 │ v2 │ 3  │
│ v1 │ v2 │ 4  │
│ v1 │ v2 │ 5  │
└────┴────┴────┘
```
Why does the result of this DISTINCT query not contain the row where `c3` is `2`?
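A possible workaround until the bug is fixed — assuming the regression comes from the DISTINCT-in-order optimization (the setting below is a real ClickHouse setting, but whether it avoids this particular bug is an assumption):

```sql
SELECT DISTINCT c1, c2, c3 FROM test
SETTINGS optimize_distinct_in_order = 0;
```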
| https://github.com/ClickHouse/ClickHouse/issues/49622 | https://github.com/ClickHouse/ClickHouse/pull/49636 | 4b3b9f6ba614a5d605553cf312a7d155f16cfe49 | 8bc04d5fa8f395ea10d9f96a244e54c49fe72c94 | "2023-05-07T13:02:58Z" | c++ | "2023-05-08T16:42:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,569 | ["src/Storages/MergeTree/MergeTreeIndexSet.cpp", "tests/queries/0_stateless/02789_set_index_nullable_condition_bug.reference", "tests/queries/0_stateless/02789_set_index_nullable_condition_bug.sql"] | Creating Index changes the query result, all rows are skipped | Clickhouse v22.8.17.17+
I expect to get 1 row, but I get an empty result
[https://fiddle.clickhouse.com/75841829-0825-4e49-878c-4e0ceba63bb0](https://fiddle.clickhouse.com/75841829-0825-4e49-878c-4e0ceba63bb0)
```
CREATE OR REPLACE TABLE test_table
(
col1 String,
col2 String,
INDEX test_table_col2_idx col2 TYPE set(0) GRANULARITY 1
) ENGINE = MergeTree()
ORDER BY col1
AS SELECT 'v1', 'v2';
-- empty result
SELECT * FROM test_table
WHERE 1 == 1 AND col1 == col1 OR
0 AND col2 == NULL;
ALTER TABLE test_table DROP INDEX test_table_col2_idx;
-- 1 row in result
SELECT *
FROM test_table
WHERE (1 == 1 AND col1 = 'v1' OR
0 AND col2 == NULL);
``` | https://github.com/ClickHouse/ClickHouse/issues/49569 | https://github.com/ClickHouse/ClickHouse/pull/51205 | 32e671d0c6e96b16c7160fba07e4e3d833d68371 | b10bcbc7f41d361fa4b7948b389dbbd4642c51e6 | "2023-05-05T18:29:29Z" | c++ | "2023-06-26T14:33:36Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,566 | ["docker/bare/README.md"] | docker container logs to /var/log/clickhouse-server instead only to /dev/stdout | (you don't have to strictly follow this form)
**Describe the unexpected behaviour**
The docker container logs to /var/log/clickhouse-server instead only to /dev/stdout.
**How to reproduce**
Create a clickhouse installation with Clickhouse operator.
Thats the output of on of the clikhouse containers right after start:
```
k -n logging logs -f chi-clickhouse-logging-replicated-1-0-0
Processing configuration file '/etc/clickhouse-server/config.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-hostname-ports.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-macros.xml'.
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-zookeeper.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-01-listen.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-02-logger.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-03-query_log.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/01-clickhouse-04-part_log.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/chop-generated-remote_servers.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/chop-generated-settings.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/custom.xml'.
Logging warning to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
```
```
k -n logging exec -it chi-clickhouse-logging-replicated-0-0-0 -- bash
clickhouse@chi-clickhouse-logging-replicated-0-0-0:/$ ls -al /var/log/clickhouse-server/
total 20
drwxrwxrwx 1 clickhouse clickhouse 4096 May 5 16:43 .
drwxr-xr-x 1 root root 4096 Mar 10 21:41 ..
-rw-r----- 1 clickhouse clickhouse 2735 May 5 16:49 clickhouse-server.err.log
-rw-r----- 1 clickhouse clickhouse 2735 May 5 16:49 clickhouse-server.log
clickhouse@chi-clickhouse-logging-replicated-0-0-0:/$
```
**Expected behavior**
No logs are written to the filesystem of the container.
| https://github.com/ClickHouse/ClickHouse/issues/49566 | https://github.com/ClickHouse/ClickHouse/pull/49605 | c3fa74ab8aa3c1e8723f715572a3b056f53f9e54 | 920b42f88f89361c580a9620191d34930741248f | "2023-05-05T16:51:01Z" | c++ | "2023-05-07T03:40:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,528 | ["src/Analyzer/Passes/UniqToCountPass.cpp", "src/Analyzer/Passes/UniqToCountPass.h", "src/Analyzer/QueryTreePassManager.cpp", "src/Core/Settings.h", "src/Interpreters/InterpreterSelectQuery.cpp", "src/Interpreters/RewriteUniqToCountVisitor.cpp", "src/Interpreters/RewriteUniqToCountVisitor.h", "tests/integration/test_rewrite_uniq_to_count/__init__.py", "tests/integration/test_rewrite_uniq_to_count/test.py"] | Rewrite `uniq` to `count` if a subquery returns distinct values. | ```
SELECT uniq(x) FROM (SELECT DISTINCT x ...)
```
can be rewritten to
```
SELECT count() FROM (SELECT DISTINCT x ...)
```
as well as
```
SELECT uniq() FROM (SELECT x ... GROUP BY x)
```
It can apply to all variants of the `uniq` functions except `uniqUpTo`.
The optimization could be implemented on a query pipeline basis. | https://github.com/ClickHouse/ClickHouse/issues/49528 | https://github.com/ClickHouse/ClickHouse/pull/52004 | bd5d93e4393eee1cb09360433e8aa36c9994e6b4 | 5f767b0dfa6c950af064c9399ec7b2e11fdac2e4 | "2023-05-04T22:25:52Z" | c++ | "2023-07-25T09:41:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,523 | ["src/Dictionaries/DictionaryFactory.cpp", "src/Dictionaries/DictionaryFactory.h", "src/Dictionaries/FlatDictionary.cpp", "src/Dictionaries/getDictionaryConfigurationFromAST.cpp", "tests/queries/0_stateless/01018_ddl_dictionaries_bad_queries.reference", "tests/queries/0_stateless/01018_ddl_dictionaries_bad_queries.sh", "tests/queries/0_stateless/02391_hashed_dictionary_shards.sql", "tests/queries/0_stateless/02731_auto_convert_dictionary_layout_to_complex_by_complex_keys.reference", "tests/queries/0_stateless/02731_auto_convert_dictionary_layout_to_complex_by_complex_keys.sql"] | If a dictionary is created with a complex key, automatically choose the "complex key" layout variant. | null | https://github.com/ClickHouse/ClickHouse/issues/49523 | https://github.com/ClickHouse/ClickHouse/pull/49587 | c825f15b7481f009baa2119d5efae05ed406b49f | 97329981119ac6cc17e8a565df645d560f756193 | "2023-05-04T21:26:57Z" | c++ | "2023-07-31T07:20:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,510 | ["src/Storages/System/attachInformationSchemaTables.cpp", "tests/integration/test_mysql_protocol/golang.reference", "tests/integration/test_mysql_protocol/test.py", "tests/integration/test_select_access_rights/test_from_system_tables.py", "tests/queries/0_stateless/01161_information_schema.reference", "tests/queries/0_stateless/02206_information_schema_show_database.reference"] | MySQL compatibility: mixed case queries | **Use case**
When using MySQL protocol to connect BI tools (for example, QuickSight) to ClickHouse, the following query is executed to introspect the dataset size, and it fails:
```sql
SELECT
data_length
FROM
information_schema.TABLES
WHERE
table_schema = 'default'
AND table_name = 'cell_towers';
```
**Describe the solution you'd like**
Mixed case `<database>.<TABLE>` (or maybe even `<DATABASE>.<table>`) works.
**Additional context**
An example log of a failing query:
```
2023.04.25 13:26:44.562738 [ 319 ] {mysql:96:11ad169d-0139-456a-abc7-e9fdfae004f6} <Error> executeQuery: Code: 60. DB::Exception: Table information_schema.TABLES doesn't exist. (UNKNOWN_TABLE) (version 23.4.1.1157 (official build)) (from 35.158.127.201:43000) (in query: SELECT data_length FROM information_schema.TABLES WHERE table_schema = 'default' AND table_name = 'cell_towers')
```
| https://github.com/ClickHouse/ClickHouse/issues/49510 | https://github.com/ClickHouse/ClickHouse/pull/52695 | 621d8522892e689d4049ae8ab131e30f97137859 | 5d64e036baec6807f1c3c180a2ba110826b94df0 | "2023-05-04T13:20:43Z" | c++ | "2023-08-03T10:15:31Z" |
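To make the expected behavior concrete, a hedged sketch of the spellings that, per this request, should all resolve to the same `information_schema` view:
```sql
-- All of these should work identically, regardless of identifier case:
SELECT table_name FROM information_schema.tables WHERE table_schema = 'default';
SELECT table_name FROM information_schema.TABLES WHERE table_schema = 'default';
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'default';
```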
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,509 | ["src/Storages/MergeTree/MergeTreeData.cpp", "tests/queries/0_stateless/02352_lightweight_delete_and_object_column.reference", "tests/queries/0_stateless/02352_lightweight_delete_and_object_column.sql"] | error: There is no physical column _row_exists in table |
**Describe the unexpected behaviour**
There is still a repeating error message in the log: `There is no physical column _row_exists in table. (NO_SUCH_COLUMN_IN_TABLE)` (see full trace below). The log is full of these messages.
It is impossible for me to find out which table it is.
* Which ClickHouse server version to use
`23.4.2.11` as well as `23.3.1.2823`
**Error message and/or stacktrace**
```
2023.05.04 15:09:45.563899 [ 3411394 ] {12b6cbf7-cfc3-4661-833d-cbc8ede1865a::202309_3_3_0_4} <Error> MutatePlainMergeTreeTask: Code: 16. DB::Exception: There is no physical column _row_exists in table. (NO_SUCH_COLUMN_IN_TABLE) (version 23.4.2.11 (official build))
2023.05.04 15:09:45.564204 [ 3411394 ] {12b6cbf7-cfc3-4661-833d-cbc8ede1865a::202309_3_3_0_4} <Error> virtual bool DB::MutatePlainMergeTreeTask::executeStep(): Code: 16. DB::Exception: There is no physical column _row_exists in table. (NO_SUCH_COLUMN_IN_TABLE), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) in /usr/bin/clickhouse
1. ? in /usr/bin/clickhouse
2. DB::ColumnsDescription::getPhysical(String const&) const in /usr/bin/clickhouse
3. DB::MergeTreeData::checkPartDynamicColumns(std::shared_ptr<DB::IMergeTreeDataPart>&, std::unique_lock<std::mutex>&) const in /usr/bin/clickhouse
4. DB::MergeTreeData::renameTempPartAndReplaceImpl(std::shared_ptr<DB::IMergeTreeDataPart>&, DB::MergeTreeData::Transaction&, std::unique_lock<std::mutex>&, std::vector<std::shared_ptr<DB::IMergeTreeDataPart const>, std::allocator<std::shared_ptr<DB::IMergeTreeDataPart const>>>*) in /usr/bin/clickhouse
5. DB::MergeTreeData::renameTempPartAndReplace(std::shared_ptr<DB::IMergeTreeDataPart>&, DB::MergeTreeData::Transaction&) in /usr/bin/clickhouse
6. DB::MutatePlainMergeTreeTask::executeStep() in /usr/bin/clickhouse
7. DB::MergeTreeBackgroundExecutor<DB::DynamicRuntimeQueue>::routine(std::shared_ptr<DB::TaskRuntimeData>) in /usr/bin/clickhouse
8. DB::MergeTreeBackgroundExecutor<DB::DynamicRuntimeQueue>::threadFunction() in /usr/bin/clickhouse
9. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) in /usr/bin/clickhouse
10. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, long, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) in /usr/bin/clickhouse
11. ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) in /usr/bin/clickhouse
12. ? in /usr/bin/clickhouse
13. ? in ?
14. clone in ?
(version 23.4.2.11 (official build))
```
| https://github.com/ClickHouse/ClickHouse/issues/49509 | https://github.com/ClickHouse/ClickHouse/pull/49737 | 56a563f2bd95ef43366b6273b4a9b496469a4423 | 7c7565f094404d97de497d448cdd951847b28669 | "2023-05-04T13:19:12Z" | c++ | "2023-05-12T10:20:07Z" |
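The report does not include a reproducer, so the following is only a hypothetical sketch, inferred from the linked fix's test name (`02352_lightweight_delete_and_object_column`): lightweight deletes materialize the hidden `_row_exists` mask column, and tables with dynamic (`Object`) columns appeared to hit the `NO_SUCH_COLUMN_IN_TABLE` check on it. Table name and values are placeholders.
```sql
SET allow_experimental_object_type = 1;
CREATE TABLE t (id UInt64, obj Object('json')) ENGINE = MergeTree ORDER BY id;
INSERT INTO t VALUES (1, '{"a": 1}');
DELETE FROM t WHERE id = 1;  -- lightweight delete adds the _row_exists mask
```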
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,445 | ["src/Interpreters/Cache/QueryCache.cpp", "src/Processors/Chunk.cpp", "src/Processors/Chunk.h"] | Bad cast from type DB::ColumnConst to DB::ColumnVector<char8_t> | https://s3.amazonaws.com/clickhouse-test-reports/49211/a08225d9ea4d0ca6ddf665a1b5b7fc69b2147137/stress_test__debug_.html | https://github.com/ClickHouse/ClickHouse/issues/49445 | https://github.com/ClickHouse/ClickHouse/pull/50704 | 945a119fc6d2b3061a2d4edb19e5c5eef73f6a54 | 305e3d1f660a3508f2060ea0032b56ff963e62ff | "2023-05-03T10:40:55Z" | c++ | "2023-06-23T15:01:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,400 | ["tests/integration/test_ssl_cert_authentication/test.py"] | Flaky test `test_ssl_cert_authentication` | https://s3.amazonaws.com/clickhouse-test-reports/0/0f6a81843fa2b5b86b8a7ac238d18b9fcdf5c657/integration_tests__release__[4_4].html | https://github.com/ClickHouse/ClickHouse/issues/49400 | https://github.com/ClickHouse/ClickHouse/pull/49982 | f850a448ecebe6a097d1f4bf711f50a863652bc0 | f39c81d13edbd84aa46760284bf7c5ecb20212c8 | "2023-05-02T11:43:34Z" | c++ | "2023-05-23T11:57:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,322 | ["src/Columns/ColumnLowCardinality.h", "tests/integration/test_system_metrics/test.py"] | Processing of small blocks with LowCardinality fields may produce a wrong value in the progress bar. | **Describe the unexpected behaviour**
```
CREATE TABLE test
(
s LowCardinality(String)
)
ENGINE = MergeTree ORDER BY ();
INSERT INTO test SELECT randomPrintableASCII(10000) FROM numbers(1000000);
SET preferred_block_size_bytes = '64K';
SELECT blockSize(), count() AS c, sum(length(s)) FROM test GROUP BY ALL ORDER BY c DESC;
```
Shows > 1 TB/sec, which is obviously wrong. | https://github.com/ClickHouse/ClickHouse/issues/49322 | https://github.com/ClickHouse/ClickHouse/pull/49323 | 965956ad55c46e6956b4f034d516e9da3aae2faa | 79ad150454a1e2560bcc051f5f438f0cb5021ef5 | "2023-04-28T22:10:19Z" | c++ | "2023-05-05T20:34:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,312 | ["src/Interpreters/ActionsVisitor.cpp", "src/Interpreters/ActionsVisitor.h", "src/Interpreters/ExpressionAnalyzer.cpp", "src/Processors/QueryPlan/CreatingSetsStep.cpp", "src/Storages/VirtualColumnUtils.cpp", "tests/queries/0_stateless/02596_build_set_and_remote.reference", "tests/queries/0_stateless/02596_build_set_and_remote.sql"] | Abort in `InterpreterSelectQuery::executeSubqueriesInSetsAndJoins` due to invalid std::promise | https://s3.amazonaws.com/clickhouse-test-reports/45596/5915290f0fe9ed8aaa5344aec85390b874bd949b/fuzzer_astfuzzerubsan/report.html
```
std::exception. Code: 1001, type: std::__1::future_error, e.what() = The associated promise has been destructed prior to the associated state becoming ready. (version 23.4.1.1) (from [::ffff:127.0.0.1]:50486) (in query: SELECT 1000.0001, toUInt64(arrayJoin([NULL, 257, 65536, NULL])), arrayExists(x -> (x IN (SELECT '2.55', NULL WITH TOTALS)), [-9223372036854775808]) FROM remote('127.0.0.{1,2}', system.one) GROUP BY NULL, NULL, NULL, NULL), Stack trace (when copying this message, always include the lines below):
0. std::exception::capture() @ 0x2af3ebf6 in /workspace/clickhouse
1. ./build_docker/./contrib/llvm-project/libcxx/src/support/runtime/stdexcept_default.ipp:26: std::logic_error::logic_error(std::logic_error const&) @ 0x4962ef3f in /workspace/clickhouse
2. ./build_docker/./contrib/llvm-project/libcxx/include/future:520: std::exception_ptr std::make_exception_ptr[abi:v15000]<std::future_error>(std::future_error) @ 0x40e265f0 in /workspace/clickhouse
3. ./build_docker/./contrib/llvm-project/libcxx/include/future:1351: std::promise<std::shared_ptr<DB::Set>>::~promise() @ 0x40fb14ca in /workspace/clickhouse
4. ./build_docker/./src/Interpreters/PreparedSets.h:55: DB::SubqueryForSet::~SubqueryForSet() @ 0x40fb1296 in /workspace/clickhouse
5. ./build_docker/./contrib/llvm-project/libcxx/include/string:1499: std::__hash_table<std::__hash_value_type<String, DB::SubqueryForSet>, std::__unordered_map_hasher<String, std::__hash_value_type<String, DB::SubqueryForSet>, std::hash<String>, std::equal_to<String>, true>, std::__unordered_map_equal<String, std::__hash_value_type<String, DB::SubqueryForSet>, std::equal_to<String>, std::hash<String>, true>, std::allocator<std::__hash_value_type<String, DB::SubqueryForSet>>>::__deallocate_node(std::__hash_node_base<std::__hash_node<std::__hash_value_type<String, DB::SubqueryForSet>, void*>*>*) @ 0x40fb11f2 in /workspace/clickhouse
6. ./build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:534: DB::addCreatingSetsStep(DB::QueryPlan&, std::shared_ptr<DB::PreparedSets>, std::shared_ptr<DB::Context const>) @ 0x44c75ffe in /workspace/clickhouse
7. ./build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:3092: DB::InterpreterSelectQuery::executeSubqueriesInSetsAndJoins(DB::QueryPlan&) @ 0x42adbb93 in /workspace/clickhouse
8. ./build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:0: DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::optional<DB::Pipe>) @ 0x42ac30bb in /workspace/clickhouse
9. ./build_docker/./contrib/llvm-project/libcxx/include/optional:260: DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x42abe50a in /workspace/clickhouse
10. ./build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:0: DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x42b9fc0c in /workspace/clickhouse
11. ./build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:0: DB::InterpreterSelectWithUnionQuery::execute() @ 0x42ba115e in /workspace/clickhouse
12. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x430404e1 in /workspace/clickhouse
13. ./build_docker/./src/Interpreters/executeQuery.cpp:1168: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x4303bb6f in /workspace/clickhouse
14. ./build_docker/./src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x444c649b in /workspace/clickhouse
15. ./build_docker/./src/Server/TCPHandler.cpp:2045: DB::TCPHandler::run() @ 0x444eb779 in /workspace/clickhouse
16. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x45a2f9de in /workspace/clickhouse
17. ./build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: Poco::Net::TCPServerDispatcher::run() @ 0x45a30a92 in /workspace/clickhouse
18. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x45c9b150 in /workspace/clickhouse
19. ./build_docker/./base/poco/Foundation/src/Thread_POSIX.cpp:0: Poco::ThreadImpl::runnableEntry(void*) @ 0x45c967f1 in /workspace/clickhouse
20. ? @ 0x7feb00253609 in ?
21. clone @ 0x7feb00178133 in ?
2023.04.27 12:52:52.394342 [ 504 ] {} <Fatal> BaseDaemon: ########################################
2023.04.27 12:52:52.394409 [ 504 ] {} <Fatal> BaseDaemon: (version 23.4.1.1, build id: 5DC4BFB4030A0B9016CF767588CD1BC7A81EF6DF) (from thread 150) (query_id: 7068c995-c0fe-4163-b966-7df324e6d962) (query: SELECT 1000.0001, toUInt64(arrayJoin([NULL, 257, 65536, NULL])), arrayExists(x -> (x IN (SELECT '2.55', NULL WITH TOTALS)), [-9223372036854775808]) FROM remote('127.0.0.{1,2}', system.one) GROUP BY NULL, NULL, NULL, NULL) Received signal Aborted (6)
2023.04.27 12:52:52.394433 [ 504 ] {} <Fatal> BaseDaemon:
2023.04.27 12:52:52.394459 [ 504 ] {} <Fatal> BaseDaemon: Stack trace: 0x7feb0009c00b 0x7feb0007b859 0x5582772eb123 0x558277309779 0x55827884d9de 0x55827884ea92 0x558278ab9150 0x558278ab47f1 0x7feb00253609 0x7feb00178133
2023.04.27 12:52:52.394485 [ 504 ] {} <Fatal> BaseDaemon: 3. raise @ 0x7feb0009c00b in ?
2023.04.27 12:52:52.394503 [ 504 ] {} <Fatal> BaseDaemon: 4. abort @ 0x7feb0007b859 in ?
2023.04.27 12:52:52.460697 [ 504 ] {} <Fatal> BaseDaemon: 5. ./build_docker/./src/Server/TCPHandler.cpp:558: DB::TCPHandler::runImpl() @ 0x444cd123 in /workspace/clickhouse
2023.04.27 12:52:52.539932 [ 504 ] {} <Fatal> BaseDaemon: 6. ./build_docker/./src/Server/TCPHandler.cpp:2045: DB::TCPHandler::run() @ 0x444eb779 in /workspace/clickhouse
2023.04.27 12:52:52.546036 [ 504 ] {} <Fatal> BaseDaemon: 7. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x45a2f9de in /workspace/clickhouse
2023.04.27 12:52:52.553998 [ 504 ] {} <Fatal> BaseDaemon: 8.1. inlined from ./build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: std::unique_ptr<Poco::Net::TCPServerConnection, std::default_delete<Poco::Net::TCPServerConnection>>::reset[abi:v15000](Poco::Net::TCPServerConnection*)
2023.04.27 12:52:52.554030 [ 504 ] {} <Fatal> BaseDaemon: 8.2. inlined from ./build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:259: ~unique_ptr
2023.04.27 12:52:52.554043 [ 504 ] {} <Fatal> BaseDaemon: 8. ./build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x45a30a92 in /workspace/clickhouse
2023.04.27 12:52:52.562263 [ 504 ] {} <Fatal> BaseDaemon: 9. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x45c9b150 in /workspace/clickhouse
2023.04.27 12:52:52.569995 [ 504 ] {} <Fatal> BaseDaemon: 10. ./build_docker/./base/poco/Foundation/src/Thread_POSIX.cpp:0: Poco::ThreadImpl::runnableEntry(void*) @ 0x45c967f1 in /workspace/clickhouse
2023.04.27 12:52:52.570022 [ 504 ] {} <Fatal> BaseDaemon: 11. ? @ 0x7feb00253609 in ?
2023.04.27 12:52:52.570040 [ 504 ] {} <Fatal> BaseDaemon: 12. clone @ 0x7feb00178133 in ?
2023.04.27 12:52:52.570065 [ 504 ] {} <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
2023.04.27 12:52:57.102787 [ 141 ] {} <Fatal> Application: Child process was terminated by signal 6.
```
Reproduces since 23.4
Fiddle: https://fiddle.clickhouse.com/e03a998d-cdf7-445d-a18a-7d236b196f9c
```
Received exception from server (version 23.4.1):
Code: 1001. DB::Exception: Received from localhost:9000. DB::Exception: std::__1::future_error: The associated promise has been destructed prior to the associated state becoming ready.. (STD_EXCEPTION)
(query: SELECT 1000.0001, toUInt64(arrayJoin([NULL, 257, 65536, NULL])), arrayExists(x -> (x IN (SELECT '2.55', NULL WITH TOTALS)), [-9223372036854775808]) FROM remote('127.0.0.{1,2}', system.one) GROUP BY NULL, NULL, NULL, NULL;)
```
| https://github.com/ClickHouse/ClickHouse/issues/49312 | https://github.com/ClickHouse/ClickHouse/pull/49425 | 2924837c7e6030058b87b9a65a06276543430cfd | a90c2ec90df2a3be7c10fe0b5f7ba8015a71d47a | "2023-04-28T15:28:04Z" | c++ | "2023-05-03T19:47:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,304 | ["docs/en/operations/backup.md", "src/Backups/RestoreSettings.cpp", "src/Backups/RestoreSettings.h", "src/Backups/RestorerFromBackup.cpp", "src/Backups/RestorerFromBackup.h", "src/Backups/SettingsFieldOptionalString.cpp", "src/Backups/SettingsFieldOptionalString.h", "tests/integration/test_backup_restore_storage_policy/__init__.py", "tests/integration/test_backup_restore_storage_policy/configs/storage_config.xml", "tests/integration/test_backup_restore_storage_policy/test.py"] | RESTORE TABLE db.table SETTINGS ... (use different storage policy for restore of backup) | **Use case**
Currently, during backup restore, it only lets you restore a table as is, with the source storage policy.
But that's not always the desired behavior: for example, with tiered storage, during RESTORE TABLE ClickHouse tries to reserve space in the storage policy according to volume order only, not the TTL policy, so old parts can land on a local disk instead of S3.
In order to avoid polluting the local disk with old data, it would be good to have the ability to restore data to another storage policy.
For example, one that has only an S3 disk. (It also makes restore much faster, i.e. the data only needs to be copied from S3 to S3.)
**Describe the solution you'd like**
RESTORE TABLE db.table SETTINGS storage_policy='s3'; FROM Backup ....
| https://github.com/ClickHouse/ClickHouse/issues/49304 | https://github.com/ClickHouse/ClickHouse/pull/52970 | 9d29b7cdbf65d2df44f7bc909e3c06c7ea9fcb9a | 6af6247f8a57d87e39057fd5516c1e6eb18fc4b7 | "2023-04-28T12:49:35Z" | c++ | "2023-08-07T17:01:08Z" |
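A slightly fuller sketch of the proposed usage, with made-up backup destination and policy names (`backups`, `s3_only`); only the `SETTINGS storage_policy` clause is the feature requested here:
```sql
BACKUP TABLE db.table TO Disk('backups', 'table_backup.zip');
-- Restore onto a different storage policy so old parts go straight to S3:
RESTORE TABLE db.table FROM Disk('backups', 'table_backup.zip')
    SETTINGS storage_policy = 's3_only';
```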
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,290 | ["docs/en/sql-reference/table-functions/file.md", "docs/ru/sql-reference/table-functions/file.md", "src/Storages/HDFS/StorageHDFS.cpp", "src/Storages/StorageFile.cpp", "tests/integration/test_storage_hdfs/test.py", "tests/queries/0_stateless/02771_complex_globs_in_storage_file_path.reference", "tests/queries/0_stateless/02771_complex_globs_in_storage_file_path.sql"] | file/(hdfs?) globs allow to have patterns across different directories | **Use case**
We want to read only specific files in each directory, and do not have access to other files.
**Describe the solution you'd like**
```
SELECT
*,
_path,
_file
FROM file('{a/1,b/2}.csv', CSV)
0 rows in set. Elapsed: 0.050 sec.
Received exception:
Code: 636. DB::Exception: Cannot extract table structure from CSV format file, because there are no files with provided path. You must specify table structure manually. (CANNOT_EXTRACT_TABLE_STRUCTURE)
SELECT
*,
_path,
_file
FROM file('a/1.csv', CSV)
┌─c1─┬─c2─┬─_path─────────────────┬─_file─┐
│  1 │  2 │ /home/xxxxxxx/a/1.csv │ 1.csv │
└────┴────┴───────────────────────┴───────┘
SELECT
*,
_path,
_file
FROM file('b/2.csv', CSV)
┌─c1─┬─c2─┬─_path─────────────────┬─_file─┐
│  3 │  4 │ /home/xxxxxxx/b/2.csv │ 2.csv │
└────┴────┴───────────────────────┴───────┘
```
It already works this way for s3 (because there are no real directories there, but still).
So you can write something like:
```
SELECT *, _path, _file
FROM s3('https://clickhouse-public-datasets.s3.amazonaws.com/my-test-bucket-768/{some/some_file_1,another/another_file_3}.csv', NOSIGN, CSVWithNames);
```
Related https://github.com/ClickHouse/ClickHouse/issues/16682 | https://github.com/ClickHouse/ClickHouse/issues/49290 | https://github.com/ClickHouse/ClickHouse/pull/50559 | 6c62c3b4268e4afc70ff8841a0a54a09d8456ecd | 32b765a4ba577acbfdb09a8d400dad8d4ef0f48d | "2023-04-27T23:53:28Z" | c++ | "2023-07-19T04:24:38Z" |
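With the linked fix, the brace glob crossing directories is expected to work for `file` the same way it already does for `s3`; the failing query from the example above becomes valid (a sketch of the intended usage, not verified output):
```sql
SELECT *, _path, _file FROM file('{a/1,b/2}.csv', CSV);
```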
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,254 | ["contrib/isa-l-cmake/CMakeLists.txt", "docs/en/development/build.md"] | aarch64 build broken after #48833 | ISA-L was introduced in #48833, and looks like it's only available for x86 architectures. This completely broke macOS M1 compilation, and I guess also any other ARM-based architecture. | https://github.com/ClickHouse/ClickHouse/issues/49254 | https://github.com/ClickHouse/ClickHouse/pull/49288 | 3f959848bd6631f0a86e06d7a3eb9bfc1177a8d2 | fd5c8355b04246d66860acab7667ee12ffc9b036 | "2023-04-27T09:41:30Z" | c++ | "2023-04-28T08:43:24Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,232 | ["docs/en/sql-reference/table-functions/url.md", "docs/ru/sql-reference/table-functions/url.md", "docs/zh/sql-reference/table-functions/url.md", "src/Common/parseRemoteDescription.cpp", "src/Common/parseRemoteDescription.h", "src/Storages/StorageURL.cpp", "src/Storages/StorageURL.h", "tests/queries/0_stateless/00646_url_engine.python", "tests/queries/0_stateless/02725_url_support_virtual_column.reference", "tests/queries/0_stateless/02725_url_support_virtual_column.sql"] | The limit on the maximum number of generated addresses cannot be changed for function `url` | ```
play-eu :) SELECT _path, count() FROM url('https://clickhouse-public-datasets.s3.amazonaws.com/wikistat/original/pageviews-20200101-{00..23}{00..59}00.gz', LineAsString) GROUP BY _path ORDER BY _path
SELECT
_path,
count()
FROM url('https://clickhouse-public-datasets.s3.amazonaws.com/wikistat/original/pageviews-20200101-{00..23}{00..59}00.gz', LineAsString)
GROUP BY _path
ORDER BY _path ASC
Query id: 90cd5580-a567-4af1-9b6f-ea99035053b1
0 rows in set. Elapsed: 0.022 sec.
Received exception from server (version 23.4.1):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Table function 'remote': first argument generates too many result addresses. (BAD_ARGUMENTS)
play-eu :) SET table_function_remote_max_addresses = 1000000
SET table_function_remote_max_addresses = 1000000
Query id: 31702a49-0a8b-4580-88fc-a87ef2079764
Ok.
0 rows in set. Elapsed: 0.001 sec.
play-eu :) SELECT _path, count() FROM url('https://clickhouse-public-datasets.s3.amazonaws.com/wikistat/original/pageviews-20200101-{00..23}{00..59}00.gz', LineAsString) GROUP BY _path ORDER BY _path
SELECT
_path,
count()
FROM url('https://clickhouse-public-datasets.s3.amazonaws.com/wikistat/original/pageviews-20200101-{00..23}{00..59}00.gz', LineAsString)
GROUP BY _path
ORDER BY _path ASC
Query id: 1e18851a-f9d7-43e4-84ef-3ffd34fb4b14
0 rows in set. Elapsed: 0.001 sec.
Received exception from server (version 23.4.1):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Table function 'remote': first argument generates too many result addresses. (BAD_ARGUMENTS)
```
Also, take a look at the exception message. It is wrong. | https://github.com/ClickHouse/ClickHouse/issues/49232 | https://github.com/ClickHouse/ClickHouse/pull/49356 | 7bed59e1d2a01eb4313e84413ad47c513451ca17 | d4b89cb643184582dd22fdf754aab92d6c7cbd16 | "2023-04-26T21:32:41Z" | c++ | "2023-05-22T16:10:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 49,171 | ["src/Storages/MergeTree/KeyCondition.cpp", "tests/queries/0_stateless/02479_nullable_primary_key_non_first_column.reference", "tests/queries/0_stateless/02479_nullable_primary_key_non_first_column.sql", "tests/queries/0_stateless/02479_nullable_primary_key_second_column.reference", "tests/queries/0_stateless/02479_nullable_primary_key_second_column.sql"] | Unexpected query result with nullable primary key | > You have to provide the following information whenever possible.
**Describe what's wrong**
Wrong query result when the table has a nullable primary key.
```sql
CREATE TABLE dm_metric_small2 (`x` Nullable(Int64), `y` Nullable(Int64), `z` Nullable(Int64)) ENGINE = MergeTree() ORDER BY (x, y, z) SETTINGS index_granularity = 1, allow_nullable_key = 1;
INSERT INTO dm_metric_small2 VALUES (1,1,NULL) (1,1,1) (1,2,0) (1,2,1) (1,2,NULL) (1,2,NULL);
SELECT '--SELECT ALL--';
SELECT * FROM dm_metric_small2;
SELECT '--SELECT NULL--';
SELECT * FROM dm_metric_small2 WHERE (x = 1) AND (y = 1) AND z IS NULL; -- bug: returns nothing, but should return the (1,1,NULL) row
```
https://fiddle.clickhouse.com/03d2d249-e981-416c-973d-b6c4a7100f4e
**Does it reproduce on recent release?**
Yes
[The list of releases](https://github.com/ClickHouse/ClickHouse/blob/master/utils/list-versions/version_date.tsv)
| https://github.com/ClickHouse/ClickHouse/issues/49171 | https://github.com/ClickHouse/ClickHouse/pull/49172 | a8e63abbb48abd70026d71e5e0189f9efcbfeb28 | abe0cfd10f913211059038f67761c5ce633e0b2d | "2023-04-26T10:00:27Z" | c++ | "2023-05-01T10:51:22Z" |
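For reference, a sketch of the result the final query should produce (the `(1,1,NULL)` row was inserted, and `z IS NULL` should match it); this is the expected output, not output quoted from the report:
```sql
SELECT * FROM dm_metric_small2 WHERE (x = 1) AND (y = 1) AND z IS NULL;
-- expected: one row
-- ┌─x─┬─y─┬────z─┐
-- │ 1 │ 1 │ ᴺᵁᴸᴸ │
-- └───┴───┴──────┘
```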