Dataset schema (one record per issue):

- status: stringclasses (1 value)
- repo_name: stringclasses (13 values)
- repo_url: stringclasses (13 values)
- issue_id: int64 (1 to 104k)
- updated_files: stringlengths (11 to 1.76k)
- title: stringlengths (4 to 369)
- body: stringlengths (0 to 254k)
- issue_url: stringlengths (38 to 55)
- pull_url: stringlengths (38 to 53)
- before_fix_sha: stringlengths (40 to 40)
- after_fix_sha: stringlengths (40 to 40)
- report_datetime: unknown
- language: stringclasses (5 values)
- commit_datetime: unknown
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35707 | ["programs/keeper/keeper_config.xml", "src/Coordination/KeeperStateManager.cpp", "tests/integration/test_keeper_incorrect_config/test.py"]

**Cannot mix loopback and non-local hostnames in raft_configuration (ClickHouse-Keeper)**

Hi!
I heard on the last release meetup that ClickHouse-Keeper was ready for production and I wanted to try it out. However, I am running into some issues that I cannot really explain.
My staging setup is 3 VMs running Ubuntu 20.04 in Vagrant (VirtualBox). I am deploying ClickHouse-Keeper into the VMs as Docker containers running the image `clickhouse/clickhouse-server:22.3.2.2`. The configuration looks like this:
```xml
<clickhouse>
<listen_host>0.0.0.0</listen_host>
<keeper_server>
<tcp_port>2181</tcp_port>
<server_id>1</server_id>
<log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
<snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
<coordination_settings>
<raft_logs_level>trace</raft_logs_level>
</coordination_settings>
<raft_configuration>
<server>
<id>1</id>
<hostname>dev-zoo01</hostname>
<port>9234</port>
</server>
<server>
<id>2</id>
<hostname>dev-zoo02</hostname>
<port>9234</port>
</server>
<server>
<id>3</id>
<hostname>dev-zoo03</hostname>
<port>9234</port>
</server>
</raft_configuration>
</keeper_server>
</clickhouse>
```
The docker container is running with `--network=host` on each of the 3 Ubuntu VMs.
This is the same setup I used when running ClickHouse-Keeper in Docker containers locally, which worked without issue.
However, in the VMs I run into this error immediately on startup. I also ran into the same error when testing on production VMs (the same setup that worked with Apache ZooKeeper).
```
Processing configuration file '/etc/clickhouse-keeper/config.xml'.
Sending crash reports is disabled
Starting ClickHouse Keeper 22.3.2.1 with revision 54460, build id: B537AFA18EAC3AF4, PID 1
starting up
OS Name = Linux, OS Version = 5.4.0-65-generic, OS Architecture = x86_64
DB::Exception: Mixing loopback and non-local hostnames ('dev-zoo01' and 'dev-zoo03') in raft_configuration is not allowed. Different hosts can resolve it to themselves so it's not allowed.
shutting down
Stop SignalListener thread
```
This error is very strange to me and not something I have encountered before. It seems to be caused by the following line in `/etc/hosts` on the Ubuntu VMs:
```
127.0.1.1 dev-zoo01.company.com dev-zoo01
```
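For context: `127.0.1.1` lies inside the `127.0.0.0/8` loopback range (Debian-style systems conventionally map the machine's own hostname to it), so with that `/etc/hosts` entry `dev-zoo01` resolves to a loopback address while `dev-zoo02` and `dev-zoo03` do not, which is exactly the mix the startup check rejects. A quick illustration (a Python sketch of the idea, not ClickHouse's actual code; the sample IPs are hypothetical):

```python
import ipaddress

def resolves_to_loopback(resolved_ip: str) -> bool:
    # Conceptual version of Keeper's check: a raft peer counts as
    # "loopback" if its hostname resolves into 127.0.0.0/8.
    return ipaddress.ip_address(resolved_ip).is_loopback

print(resolves_to_loopback("127.0.1.1"))    # True: what dev-zoo01 resolves to locally
print(resolves_to_loopback("192.168.0.7"))  # False: a peer resolved over real DNS
```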
If I remove that line from the `/etc/hosts` file it seems to work, but I am not sure how that will affect the rest of the system. Is this expected behavior? What is the recommended solution to this?

| https://github.com/ClickHouse/ClickHouse/issues/35707 | https://github.com/ClickHouse/ClickHouse/pull/36492 | 9fa1edd6ef331d0ae1473036c9a73fb7c3263520 | f0f92341e0da459ad9ec66e74cc7aaf18ce9f4a5 | "2022-03-29T08:47:56Z" | c++ | "2022-04-23T11:54:57Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35641 | ["docs/en/sql-reference/statements/create/table.md", "docs/ru/sql-reference/statements/create/table.md", "src/Interpreters/InterpreterCreateQuery.cpp", "src/Parsers/ASTColumnDeclaration.cpp", "src/Parsers/ParserCreateQuery.h", "tests/queries/0_stateless/02205_ephemeral_1.reference", "tests/queries/0_stateless/02205_ephemeral_1.sql"]

**Allow EPHEMERAL column implicit default**

ref #9436
Currently an EPHEMERAL column requires an explicit default value to be provided on CREATE TABLE; we could allow an implicit default there.
| https://github.com/ClickHouse/ClickHouse/issues/35641 | https://github.com/ClickHouse/ClickHouse/pull/35706 | b56beeca9d78569a48092ab43facc892cf8dbbce | 38993f215f9186c1ac3505b1e09697cd9a468b5d | "2022-03-27T02:10:40Z" | c++ | "2022-04-01T14:49:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35632 | ["docs/en/operations/server-configuration-parameters/settings.md", "docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md", "docs/en/sql-reference/functions/index.md", "docs/ru/operations/server-configuration-parameters/settings.md", "docs/ru/sql-reference/functions/index.md"]

**user-defined-functions doesn't work**

The example in https://clickhouse.com/docs/en/sql-reference/functions/#executable-user-defined-functions still doesn't work.
I got this error:

```
Received exception from server (version 22.3.2):
Code: 46. DB::Exception: Received from localhost:9000. DB::Exception: Unknown function test_function_python: While processing test_function_python(toUInt64(2)). (UNKNOWN_FUNCTION)
```
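For context on what the docs example involves: an executable UDF is an external command that ClickHouse launches, feeding each input row to its stdin in a configured format and reading the result from its stdout. A minimal sketch of such a script follows; the function name and the "Value " prefix are illustrative, not necessarily the exact docs example. Note that the `UNKNOWN_FUNCTION` error above fires before any script would run, which suggests the function's XML configuration was never loaded at all (an assumption based on how executable UDFs are registered, not something confirmed in the report).

```python
def transform(line: str) -> str:
    # One TabSeparated input row in, one result row out.
    return "Value " + line.rstrip("\n") + "\n"

# A real UDF script would loop over stdin and flush after each row:
#   import sys
#   for line in sys.stdin:
#       sys.stdout.write(transform(line))
#       sys.stdout.flush()
print(transform("2"), end="")  # prints: Value 2
```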
| https://github.com/ClickHouse/ClickHouse/issues/35632 | https://github.com/ClickHouse/ClickHouse/pull/36757 | d8fa806fca1e2130ad43977db9b5bf16e8f94a2d | c8178f88df9152252e699f57e7a00b471000dc4f | "2022-03-26T03:10:29Z" | c++ | "2022-04-28T20:10:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35625 | ["tests/queries/0_stateless/02887_tuple_element_distributed.reference", "tests/queries/0_stateless/02887_tuple_element_distributed.sql"]

**Incorrect processing of queries with aliasing on tuples with distributed tables**

**Describe the issue**
Running the following query on a Distributed table engine leads to a malformed query on the remote table. The same query works on ClickHouse version 20.7.
`SELECT equals(tupleElement(tuple('a', 10) AS x, 1), 'a') FROM transactions_dist LIMIT 1`
**How to reproduce**
* Incompatible ClickHouse server version: 21.8.13.6
* `CREATE TABLE` statements for all tables involved: the table definition does not matter, since the SELECT clause does not reference a column from the table; the distributed table just needs to reference a remote table.
* Query that leads to the unexpected result:
`SELECT equals(tupleElement(tuple('a', 10) AS x, 1), 'a') FROM transactions_dist LIMIT 1`
**Error message and/or stacktrace**
Below is the debug trace of the SQL, which shows both the query run on the distributed table and the queries sent to the remote tables:
```
query-new :) SELECT equals(tupleElement(tuple('a', 10) AS x, 1), 'a') FROM transactions_dist LIMIT 1
SELECT ((('a', 10) AS x).1) = 'a'
FROM transactions_dist
LIMIT 1
Query id: f040e9fb-a282-4662-8f6b-ddb08fd5e9fc
[query-new] 2022.03.25 17:48:45.185698 [ 58 ] {f040e9fb-a282-4662-8f6b-ddb08fd5e9fc} <Debug> executeQuery: (from 172.21.0.18:34960) SELECT equals(tupleElement(tuple('a', 10) AS x, 1), 'a') FROM transactions_dist LIMIT 1
[clickhouse-03] 2022.03.25 17:48:45.195978 [ 58 ] {8cd28a6e-621f-4265-8b9b-a7676d754c31} <Debug> executeQuery: (from 172.21.0.16:40428, initial_query_id: f040e9fb-a282-4662-8f6b-ddb08fd5e9fc) SELECT ((('a', 10) AS x).1) = 'a' FROM default.transactions_local LIMIT 1
[clickhouse-04] 2022.03.25 17:48:45.196940 [ 58 ] {58bd88e6-9c06-4190-a812-9ae7dba84e05} <Debug> executeQuery: (from 172.21.0.16:38224, initial_query_id: f040e9fb-a282-4662-8f6b-ddb08fd5e9fc) SELECT ((('a', 10) AS x).1) = 'a' FROM default.transactions_local LIMIT 1
[clickhouse-03] 2022.03.25 17:48:45.198214 [ 58 ] {8cd28a6e-621f-4265-8b9b-a7676d754c31} <Error> executeQuery: Code: 27, e.displayText() = DB::ParsingException: Cannot parse input: expected '(' before: 'a': while converting 'a' to Tuple(String, UInt8): While processing (tuple(('a', 10) AS x).1) = 'a' (version 21.8.13.6 (official build)) (from 172.21.0.16:40428) (in query: SELECT ((('a', 10) AS x).1) = 'a' FROM default.transactions_local LIMIT 1), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8febd9a in /usr/bin/clickhouse
1. DB::throwAtAssertionFailed(char const*, DB::ReadBuffer&) @ 0x9045a17 in /usr/bin/clickhouse
2. ? @ 0xff62a91 in /usr/bin/clickhouse
3. ? @ 0x10802302 in /usr/bin/clickhouse
4. DB::convertFieldToType(DB::Field const&, DB::IDataType const&, DB::IDataType const*) @ 0x10801351 in /usr/bin/clickhouse
5. DB::FunctionComparison<DB::EqualsOp, DB::NameEquals>::executeWithConstString(std::__1::shared_ptr<DB::IDataType const> const&, DB::IColumn const*, DB::IColumn const*, std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xcb4017f in /usr/bin/clickhouse
6. DB::FunctionComparison<DB::EqualsOp, DB::NameEquals>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xcb2d912 in /usr/bin/clickhouse
7. DB::IFunction::executeImplDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xb18a8ea in /usr/bin/clickhouse
8. DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xb18996e in /usr/bin/clickhouse
9. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0xfb39925 in /usr/bin/clickhouse
10. DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0xfb39f52 in /usr/bin/clickhouse
11. DB::ActionsDAG::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x1002b77f in /usr/bin/clickhouse
12. DB::ScopeStack::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x102a7312 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x102b146c in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTExpressionList&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x102b94de in /usr/bin/clickhouse
15. DB::InDepthNodeVisitor<DB::ActionsMatcher, true, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) @ 0x10277f97 in /usr/bin/clickhouse
16. DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ActionsDAG>&, bool) @ 0x10277c85 in /usr/bin/clickhouse
17. DB::SelectQueryExpressionAnalyzer::appendSelect(DB::ExpressionActionsChain&, bool) @ 0x102854c8 in /usr/bin/clickhouse
18. DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::__1::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) @ 0x1028ab3f in /usr/bin/clickhouse
19. DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0x10488b26 in /usr/bin/clickhouse
20. ? @ 0x1048166b in /usr/bin/clickhouse
21. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&) @ 0x1047bb58 in /usr/bin/clickhouse
22. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x1047a25e in /usr/bin/clickhouse
23. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x106571fa in /usr/bin/clickhouse
24. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x10265837 in /usr/bin/clickhouse
25. ? @ 0x1081bc66 in /usr/bin/clickhouse
26. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool) @ 0x1081a5a3 in /usr/bin/clickhouse
27. DB::TCPHandler::runImpl() @ 0x110c08be in /usr/bin/clickhouse
28. DB::TCPHandler::run() @ 0x110d3859 in /usr/bin/clickhouse
29. Poco::Net::TCPServerConnection::start() @ 0x13c4ca2f in /usr/bin/clickhouse
30. Poco::Net::TCPServerDispatcher::run() @ 0x13c4e4ba in /usr/bin/clickhouse
31. Poco::PooledThread::run() @ 0x13d80739 in /usr/bin/clickhouse
[query-new] 2022.03.25 17:48:45.220933 [ 58 ] {f040e9fb-a282-4662-8f6b-ddb08fd5e9fc} <Error> executeQuery: Code: 27, e.displayText() = DB::Exception: Received from clickhouse-03:9000. DB::Exception: Cannot parse input: expected '(' before: 'a': while converting 'a' to Tuple(String, UInt8): While processing (tuple(('a', 10) AS x).1) = 'a'. Stack trace:
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8febd9a in /usr/bin/clickhouse
1. DB::throwAtAssertionFailed(char const*, DB::ReadBuffer&) @ 0x9045a17 in /usr/bin/clickhouse
2. ? @ 0xff62a91 in /usr/bin/clickhouse
3. ? @ 0x10802302 in /usr/bin/clickhouse
4. DB::convertFieldToType(DB::Field const&, DB::IDataType const&, DB::IDataType const*) @ 0x10801351 in /usr/bin/clickhouse
5. DB::FunctionComparison<DB::EqualsOp, DB::NameEquals>::executeWithConstString(std::__1::shared_ptr<DB::IDataType const> const&, DB::IColumn const*, DB::IColumn const*, std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xcb4017f in /usr/bin/clickhouse
6. DB::FunctionComparison<DB::EqualsOp, DB::NameEquals>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xcb2d912 in /usr/bin/clickhouse
7. DB::IFunction::executeImplDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xb18a8ea in /usr/bin/clickhouse
8. DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xb18996e in /usr/bin/clickhouse
9. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0xfb39925 in /usr/bin/clickhouse
10. DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0xfb39f52 in /usr/bin/clickhouse
11. DB::ActionsDAG::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x1002b77f in /usr/bin/clickhouse
12. DB::ScopeStack::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x102a7312 in /usr/bin/clickhouse
13. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x102b146c in /usr/bin/clickhouse
14. DB::ActionsMatcher::visit(DB::ASTExpressionList&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x102b94de in /usr/bin/clickhouse
15. DB::InDepthNodeVisitor<DB::ActionsMatcher, true, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) @ 0x10277f97 in /usr/bin/clickhouse
16. DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ActionsDAG>&, bool) @ 0x10277c85 in /usr/bin/clickhouse
17. DB::SelectQueryExpressionAnalyzer::appendSelect(DB::ExpressionActionsChain&, bool) @ 0x102854c8 in /usr/bin/clickhouse
18. DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::__1::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) @ 0x1028ab3f in /usr/bin/clickhouse
19. DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0x10488b26 in /usr/bin/clickhouse
20. ? @ 0x1048166b in /usr/bin/clickhouse
21. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&) @ 0x1047bb58 in /usr/bin/clickhouse
22. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x1047a25e in /usr/bin/clickhouse
23. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x106571fa in /usr/bin/clickhouse
24. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x10265837 in /usr/bin/clickhouse
25. ? @ 0x1081bc66 in /usr/bin/clickhouse
26. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool) @ 0x1081a5a3 in /usr/bin/clickhouse
27. DB::TCPHandler::runImpl() @ 0x110c08be in /usr/bin/clickhouse
28. DB::TCPHandler::run() @ 0x110d3859 in /usr/bin/clickhouse
29. Poco::Net::TCPServerConnection::start() @ 0x13c4ca2f in /usr/bin/clickhouse
30. Poco::Net::TCPServerDispatcher::run() @ 0x13c4e4ba in /usr/bin/clickhouse
31. Poco::PooledThread::run() @ 0x13d80739 in /usr/bin/clickhouse
: While executing Remote (version 21.8.13.6 (official build)) (from 172.21.0.18:34960) (in query: SELECT equals(tupleElement(tuple('a', 10) AS x, 1), 'a') FROM transactions_dist LIMIT 1), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8febd9a in /usr/bin/clickhouse
1. DB::readException(DB::ReadBuffer&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool) @ 0x904baf4 in /usr/bin/clickhouse
2. DB::Connection::receiveException() const @ 0x10fe2b22 in /usr/bin/clickhouse
3. DB::Connection::receivePacket() @ 0x10fec8c9 in /usr/bin/clickhouse
4. DB::MultiplexedConnections::receivePacketUnlocked(std::__1::function<void (int, Poco::Timespan, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>) @ 0x1100ecbc in /usr/bin/clickhouse
5. DB::RemoteQueryExecutorRoutine::operator()(boost::context::fiber&&) const @ 0xfe747e2 in /usr/bin/clickhouse
6. void boost::context::detail::fiber_entry<boost::context::detail::fiber_record<boost::context::fiber, FiberStack&, DB::RemoteQueryExecutorRoutine> >(boost::context::detail::transfer_t) @ 0xfe746ae in /usr/bin/clickhouse
0 rows in set. Elapsed: 0.043 sec.
Received exception from server (version 21.8.13):
Code: 27. DB::Exception: Received from clickhouse-server:9000. DB::Exception: Received from clickhouse-03:9000. DB::Exception: Cannot parse input: expected '(' before: 'a': while converting 'a' to Tuple(String, UInt8): While processing (tuple(('a', 10) AS x).1) = 'a'. (CANNOT_PARSE_INPUT_ASSERTION_FAILED)
```
The same query, when run on 20.7:
```
query-old :) SELECT version()
SELECT version()
Query id: 5be99491-8abf-476f-a8fd-6c790917ecac
[query-old] 2022.03.29 17:51:25.146025 [ 103 ] {5be99491-8abf-476f-a8fd-6c790917ecac} <Debug> executeQuery: (from 172.21.0.21:54622) SELECT version()
ββversion()ββ
β 20.7.4.11 β
βββββββββββββ
[query-old] 2022.03.29 17:51:25.147978 [ 103 ] {5be99491-8abf-476f-a8fd-6c790917ecac} <Information> executeQuery: Read 1 rows, 1.00 B in 0.0018613 sec., 537 rows/sec., 537.26 B/sec.
[query-old] 2022.03.29 17:51:25.148090 [ 103 ] {5be99491-8abf-476f-a8fd-6c790917ecac} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
1 rows in set. Elapsed: 0.004 sec.
query-old :) SELECT equals(tupleElement(tuple('a', 10) AS x, 1), 'a') FROM transactions_dist LIMIT 1
SELECT ((('a', 10) AS x).1) = 'a'
FROM transactions_dist
LIMIT 1
Query id: 05bc097a-0026-4db7-ac44-90b225ad8d11
[query-old] 2022.03.29 17:51:29.314845 [ 103 ] {05bc097a-0026-4db7-ac44-90b225ad8d11} <Debug> executeQuery: (from 172.21.0.21:54622) SELECT equals(tupleElement(tuple('a', 10) AS x, 1), 'a') FROM transactions_dist LIMIT 1
[clickhouse-01] 2022.03.29 17:51:29.318555 [ 110 ] {e8fca792-0793-4302-b865-e1a8df6c61c2} <Debug> executeQuery: (from 172.21.0.10:37338, initial_query_id: 05bc097a-0026-4db7-ac44-90b225ad8d11) SELECT ((('a', 10) AS x).1) = 'a' FROM default.transactions_local LIMIT 1
[clickhouse-01] 2022.03.29 17:51:29.319620 [ 110 ] {e8fca792-0793-4302-b865-e1a8df6c61c2} <Debug> default.transactions_local (SelectExecutor): Key condition: unknown
[clickhouse-01] 2022.03.29 17:51:29.319654 [ 110 ] {e8fca792-0793-4302-b865-e1a8df6c61c2} <Debug> default.transactions_local (SelectExecutor): MinMax index condition: unknown
[clickhouse-01] 2022.03.29 17:51:29.320040 [ 110 ] {e8fca792-0793-4302-b865-e1a8df6c61c2} <Debug> default.transactions_local (SelectExecutor): Selected 8 parts by date, 8 parts by key, 8 marks by primary key, 8 marks to read from 8 ranges
ββequals(tupleElement(tuple('a', 10), 1), 'a')ββ
β 1 β
ββββββββββββββββββββββββββββββββββββββββββββββββ
[clickhouse-02] 2022.03.29 17:51:29.325999 [ 112 ] {f80743c3-7286-488e-aeac-a01ee5055906} <Debug> executeQuery: (from 172.21.0.10:46086, initial_query_id: 05bc097a-0026-4db7-ac44-90b225ad8d11) SELECT ((('a', 10) AS x).1) = 'a' FROM default.transactions_local LIMIT 1
[clickhouse-02] 2022.03.29 17:51:29.327260 [ 112 ] {f80743c3-7286-488e-aeac-a01ee5055906} <Debug> default.transactions_local (SelectExecutor): Key condition: unknown
[clickhouse-02] 2022.03.29 17:51:29.327298 [ 112 ] {f80743c3-7286-488e-aeac-a01ee5055906} <Debug> default.transactions_local (SelectExecutor): MinMax index condition: unknown
[clickhouse-02] 2022.03.29 17:51:29.327677 [ 112 ] {f80743c3-7286-488e-aeac-a01ee5055906} <Debug> default.transactions_local (SelectExecutor): Selected 8 parts by date, 8 parts by key, 8 marks by primary key, 8 marks to read from 8 ranges
[query-old] 2022.03.29 17:51:29.332370 [ 103 ] {05bc097a-0026-4db7-ac44-90b225ad8d11} <Debug> MemoryTracker: Peak memory usage (for query): 4.02 MiB.
1 rows in set. Elapsed: 0.019 sec.
```
**Additional context**
None
| https://github.com/ClickHouse/ClickHouse/issues/35625 | https://github.com/ClickHouse/ClickHouse/pull/54960 | aa37814b3a5518c730eff221b86645e153363306 | ca7796ba85d0757071daad8ac1b3109fbcbd6309 | "2022-03-25T18:04:21Z" | c++ | "2023-09-24T21:17:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35567 | ["src/Functions/caseWithExpression.cpp", "tests/queries/0_stateless/02244_casewithexpression_return_type.reference", "tests/queries/0_stateless/02244_casewithexpression_return_type.sql"]

**Got wrong result for nested case when expression**

**If we use a nested case when expression, it may produce results with the wrong data type.**
> How to reproduce
- release version: 21.13.1.1
- execute following sql with clickhouse client:
```sql
SELECT "number", CASE "number"
WHEN 3 THEN 55
WHEN 6 THEN 77
WHEN 9 THEN 95
ELSE CASE
WHEN "number"=1 THEN 10
WHEN "number"=10 THEN 100
ELSE 555555
END
END AS "LONG_COL_0"
FROM `system`.numbers
LIMIT 20;
```
- it returns the wrong result (35 should be 555555):
```shell
SELECT
number,
caseWithExpression(number, 3, 55, 6, 77, 9, 95, multiIf(number = 1, 10, number = 10, 100, 555555)) AS LONG_COL_0
FROM system.numbers
LIMIT 20
Query id: a713aa47-68d0-4a22-abd8-256fbcdcb073
ββnumberββ¬βLONG_COL_0ββ
β 0 β 35 β
β 1 β 10 β
β 2 β 35 β
β 3 β 55 β
β 4 β 35 β
β 5 β 35 β
β 6 β 77 β
β 7 β 35 β
β 8 β 35 β
β 9 β 95 β
β 10 β 100 β
β 11 β 35 β
β 12 β 35 β
β 13 β 35 β
β 14 β 35 β
β 15 β 35 β
β 16 β 35 β
β 17 β 35 β
β 18 β 35 β
β 19 β 35 β
ββββββββββ΄βββββββββββββ
```
It seems a **UInt8** data type is used to store the number 555555.
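The value 35 is consistent with that: 555555 truncated to a single unsigned byte is exactly 35, i.e. the ELSE branch's literal appears to be squeezed through a UInt8 result type inferred from the other branches. A quick sanity check (Python, illustrative only):

```python
# 555555 does not fit in UInt8; keeping only the low 8 bits reproduces
# the garbage value seen in the query output above.
wrapped = 555555 % 256  # equivalently, 555555 & 0xFF
print(wrapped)  # 35
```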
But if we use a nested **multiIf** expression, the result is correct.
Can you help resolve this **case when expression** problem?

| https://github.com/ClickHouse/ClickHouse/issues/35567 | https://github.com/ClickHouse/ClickHouse/pull/35576 | cfb12aff6f6976a33f665c8c74684c8a2971482e | e27a68ef8ce1f1c5f1e02d3c337fe16c011de6c8 | "2022-03-24T08:53:22Z" | c++ | "2022-03-25T15:01:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35548 | ["src/Interpreters/HashJoin.cpp", "src/Interpreters/join_common.cpp", "tests/queries/0_stateless/02244_lowcardinality_hash_join.reference", "tests/queries/0_stateless/02244_lowcardinality_hash_join.sql"]

**Unable to use LowCardinality column in JOIN ON condition containing OR operator**

**Describe what's wrong**
Including a LowCardinality column in an `ON` clause containing an `OR` operator results in this exception:
```
DB::Exception: Expected ColumnLowCardinality, got String: While executing JoiningTransform. (ILLEGAL_COLUMN)
```
**Does it reproduce on recent release?**
22.1.2
**How to reproduce**
Minimal repro case:
```sql
WITH t1 AS
(
SELECT toLowCardinality('x') AS col
)
SELECT *
FROM t1
INNER JOIN t1 AS t2 ON (t1.col = t2.col) OR (t1.col = t2.col)
```
However, casting to String works:
```sql
WITH t1 AS
(
SELECT toLowCardinality('x') AS col
)
SELECT *
FROM t1
INNER JOIN t1 AS t2 ON (CAST(t1.col AS String) = CAST(t2.col AS String)) OR (CAST(t1.col AS String) = CAST(t2.col AS String))
```
Without `OR`, casting is not necessary:
```sql
WITH t1 AS
(
SELECT toLowCardinality('x') AS col
)
SELECT *
FROM t1
INNER JOIN t1 AS t2 ON t1.col = t2.col
```
**Expected behavior**
It should not be necessary to use CAST when joining on a LowCardinality column with multiple `OR` expressions.

| https://github.com/ClickHouse/ClickHouse/issues/35548 | https://github.com/ClickHouse/ClickHouse/pull/35616 | 0f9cc9f924957ac619eb49f5fd71466e600d9664 | c4d0cc7c848cd9d9a1475332440b3dba48c1cd60 | "2022-03-23T19:19:38Z" | c++ | "2022-03-28T06:28:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35528 | ["src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/02243_in_ip_address.reference", "tests/queries/0_stateless/02243_in_ip_address.sql"]

**Invalid IPv6 value (CANNOT_PARSE_DOMAIN_VALUE_FROM_STRING) with subquery**

Inclusion (`IN`/`NOT IN`) test does not work when the subquery returns IPv6 values.
**Does it reproduce on recent release?**
Yes, on version `22.3.2`.
**How to reproduce**
On a stock Docker container (`clickhouse/clickhouse-server:22`):
```sql
CREATE TABLE IF NOT EXISTS test_ipv6 (a IPv6) ENGINE = MergeTree ORDER BY a;
INSERT INTO test_ipv6 VALUES ('::ffff:1.1.1.1'),('::ffff:2.2.2.2');
```
```sql
SELECT a FROM test_ipv6 FORMAT TabSeparated;
-- ::ffff:1.1.1.1
-- ::ffff:2.2.2.2
-- OK!
SELECT a FROM test_ipv6 WHERE a <= toIPv6('::ffff:1.1.1.1') FORMAT TabSeparated;
-- ::ffff:1.1.1.1
-- OK!
SELECT a
FROM test_ipv6
WHERE a IN (
SELECT a
FROM test_ipv6
WHERE a <= toIPv6('::ffff:1.1.1.1')
);
-- Received exception from server (version 22.3.2):
-- Code: 441. DB::Exception: Received from localhost:9000. DB::Exception: Invalid IPv6 value: while executing 'FUNCTION in(a : 0, __set :: 1) -> in(a, _subquery9) UInt8 : 2'. (CANNOT_PARSE_DOMAIN_VALUE_FROM_STRING)
```
**Expected behavior**
The last query should work and return `::ffff:1.1.1.1`.
**Error message and/or stacktrace**
```
Received exception from server (version 22.3.2):
Code: 441. DB::Exception: Received from localhost:9000. DB::Exception: Invalid IPv6 value: while executing 'FUNCTION in(a : 0, __set :: 1) -> in(a, _subquery9) UInt8 : 2'. (CANNOT_PARSE_DOMAIN_VALUE_FROM_STRING)
```
**Additional context**
- Same issue with two different tables (e.g. `test_ipv6_1` and `test_ipv6_2`).
- Works on ClickHouse 21.
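One observation (an inference, not a confirmed diagnosis): the literal itself is well-formed, since `::ffff:1.1.1.1` is the standard IPv4-mapped IPv6 form of `1.1.1.1`, so the parse failure presumably happens while the subquery's result set is converted for the `IN` check rather than because of the value. The literal can be validated with Python's standard library:

```python
import ipaddress

addr = ipaddress.IPv6Address("::ffff:1.1.1.1")
# The address parses fine and is the IPv4-mapped form of 1.1.1.1.
print(addr.ipv4_mapped)  # 1.1.1.1
```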
| https://github.com/ClickHouse/ClickHouse/issues/35528 | https://github.com/ClickHouse/ClickHouse/pull/35534 | 1df1721648367cd2194cb89389afcd93485bed40 | 4b88c6f934832df517607d30f82bd40d4494095c | "2022-03-23T12:36:03Z" | c++ | "2022-03-24T00:26:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35521 | ["src/Dictionaries/FlatDictionary.cpp", "src/Dictionaries/HashedArrayDictionary.cpp", "src/Dictionaries/HashedDictionary.cpp", "src/Dictionaries/HierarchyDictionariesUtils.cpp", "src/Dictionaries/HierarchyDictionariesUtils.h", "src/Dictionaries/tests/gtest_hierarchy_dictionaries_utils.cpp", "src/Functions/FunctionsExternalDictionaries.h", "tests/queries/0_stateless/02316_hierarchical_dictionaries_nullable_parent_key.reference", "tests/queries/0_stateless/02316_hierarchical_dictionaries_nullable_parent_key.sql"]

**HIERARCHICAL dictionary not support nullable parent_id with function dictGetDescendants**

```sql
create table default.test_parent
(
id UInt64,
parent_id Nullable(UInt64)
) engine = MergeTree order by id -- assumption: MergeTree needs an ORDER BY; id is the natural key here
;
create dictionary default.d4test_parent
( id UInt64,
parent_id Nullable(UInt64) HIERARCHICAL
)
PRIMARY KEY id
SOURCE (CLICKHOUSE(DB 'default' TABLE 'test_parent'))
LIFETIME (MIN 0 MAX 0)
LAYOUT (HASHED)
;
insert into default.test_parent
values (1, null),
(2, 3),
(3, 1),
(4, 2)
;
system reload dictionary default.d4test_parent
;
select dictGetDescendants('default.d4test_parent', toUInt64(2))
;
```
Code: 53. DB::Exception: default.d4test_parent (443f6664-e217-44ec-80e2-bbec982b4963): type mismatch: column has wrong type expected UInt64: While processing dictGetDescendants('default.d4test_parent', 2). (TYPE_MISMATCH) (version 22.3.2.1)
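For reference, the hierarchy built by the insert above, and the descendants the query is expected to return, can be sketched in Python (the BFS below illustrates the expected semantics only, not ClickHouse's implementation):

```python
from collections import deque

# parent link per id, from the insert above (None = no parent)
parent = {1: None, 2: 3, 3: 1, 4: 2}

def descendants(root):
    # invert the parent links into child lists, then walk them breadth-first
    children = {}
    for node, p in parent.items():
        if p is not None:
            children.setdefault(p, []).append(node)
    out, queue = [], deque(children.get(root, []))
    while queue:
        node = queue.popleft()
        out.append(node)
        queue.extend(children.get(node, []))
    return out

print(descendants(2))  # [4]
```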
On old versions there are no errors:
```
select version()
-- 21.11.6.7
;
select dictGetDescendants('default.d4test_parent', toUInt64(2))
-- [4]
``` | https://github.com/ClickHouse/ClickHouse/issues/35521 | https://github.com/ClickHouse/ClickHouse/pull/37805 | 9fdc783eacdfae7fbf224fa2bda4e982aaeb5c13 | 4e160105b9cab498bf7381c89e0de72b55efa56c | "2022-03-23T10:07:59Z" | c++ | "2022-06-08T10:36:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,505 | ["src/Interpreters/InterpreterSelectQuery.cpp", "tests/queries/0_stateless/02242_optimize_to_subcolumns_no_storage.reference", "tests/queries/0_stateless/02242_optimize_to_subcolumns_no_storage.sql"] | The fuzzer error | (you don't have to strictly follow this form)
**Describe the bug**
A link to the report: https://s3.amazonaws.com/clickhouse-test-reports/35466/d81c58b5d9be45ac69c3859b5cbe97080711f6d9/fuzzer_astfuzzerubsan,actions//report.html
The commit: https://github.com/ClickHouse/ClickHouse/commit/d81c58b5d9be45ac69c3859b5cbe97080711f6d9 | https://github.com/ClickHouse/ClickHouse/issues/35505 | https://github.com/ClickHouse/ClickHouse/pull/35512 | 89e06308989337321046e33597be39b94c6c456f | 1f940f9e3b503693dd020aa652692029f46dc6d5 | "2022-03-22T15:16:22Z" | c++ | "2022-03-23T10:31:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,483 | ["src/Common/ProfileEvents.cpp", "src/IO/ReadBufferFromS3.cpp", "src/IO/WriteBufferFromS3.cpp", "tests/integration/test_profile_events_s3/test.py"] | it looks like S3 profile events numbers are inflated | The ProfileEvents values S3ReadMicroseconds and RealTimeMicroseconds look inconsistent.
Elapsed: 0.193 sec. / query_duration_ms = 192
VS
S3ReadMicroseconds = 350736 / RealTimeMicroseconds = 383210.
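Spelled out, the inconsistency is that both profiled timings exceed the whole query's wall-clock duration:

```python
elapsed_s    = 0.193     # client-side "Elapsed" (query_duration_ms = 192)
s3_read_us   = 350_736   # ProfileEvents['S3ReadMicroseconds']
real_time_us = 383_210   # ProfileEvents['RealTimeMicroseconds']

# ~0.35 s of S3 reading and ~0.38 s of real time
# reported for a query that only took 0.193 s
print(s3_read_us / 1e6 > elapsed_s)    # True
print(real_time_us / 1e6 > elapsed_s)  # True
```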
```sql
INSERT INTO FUNCTION s3(s3_mydata, url = 'https://s3.us-east-1.amazonaws.com/test..../test_file22222.tsv', structure = 'A Int64', format = 'TSV', compression_method = 'none') SELECT number AS A
FROM numbers(10000)
set max_threads=1;
SELECT count()
FROM s3(s3_mydata, url = 'https://s3.us-east-1.amazonaws.com/test..../test_file22222.tsv', structure = 'A Int64', format = 'TSV', compression_method = 'none')
Query id: 2f63b92a-eae3-45c5-a30c-2a8b7681147a
┌─count()─┐
│   10000 │
└─────────┘
1 rows in set. Elapsed: 0.193 sec. Processed 10.00 thousand rows, 80.00 KB (51.72 thousand rows/s., 413.74 KB/s.)
SELECT
query_duration_ms,
k,
v
FROM system.query_log
ARRAY JOIN
ProfileEvents.keys AS k,
ProfileEvents.values AS v
WHERE (query_id = '2f63b92a-eae3-45c5-a30c-2a8b7681147a') AND (type = 2) AND (event_date = today())
ββquery_duration_msββ¬βkββββββββββββββββββββββββββββββββββ¬βββββββvββ
β 192 β Query β 1 β
β 192 β SelectQuery β 1 β
β 192 β IOBufferAllocs β 1 β
β 192 β IOBufferAllocBytes β 1048576 β
β 192 β ArenaAllocChunks β 1 β
β 192 β ArenaAllocBytes β 4096 β
β 192 β TableFunctionExecute β 1 β
β 192 β NetworkReceiveElapsedMicroseconds β 7 β
β 192 β NetworkSendElapsedMicroseconds β 214 β
β 192 β NetworkSendBytes β 3427 β
β 192 β SelectedRows β 10000 β
β 192 β SelectedBytes β 80000 β
β 192 β ContextLock β 28 β
β 192 β RWLockAcquiredReadLocks β 1 β
β 192 β RealTimeMicroseconds β 383210 β --- ?
β 192 β UserTimeMicroseconds β 5004 β
β 192 β SystemTimeMicroseconds β 492 β
β 192 β SoftPageFaults β 34 β
β 192 β OSCPUWaitMicroseconds β 37 β
β 192 β OSCPUVirtualTimeMicroseconds β 5494 β
β 192 β OSWriteBytes β 4096 β
β 192 β OSReadChars β 53248 β
β 192 β OSWriteChars β 2048 β
β 192 β CreatedHTTPConnections β 1 β
β 192 β S3ReadMicroseconds β 350736 β --- ?
β 192 β S3ReadBytes β 48890 β
β 192 β S3ReadRequestsCount β 1 β
βββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββ΄ββββββββββ
| https://github.com/ClickHouse/ClickHouse/issues/35483 | https://github.com/ClickHouse/ClickHouse/pull/36572 | 02291d14645b3b893eadc45b3a81cb777595f239 | 7e3a805ae3fca0b1a3a2143206be566f8dbffa29 | "2022-03-21T17:49:36Z" | c++ | "2022-05-05T09:44:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,432 | ["src/Functions/h3kRing.cpp", "tests/queries/0_stateless/01042_h3_k_ring.reference", "tests/queries/0_stateless/01042_h3_k_ring.sql"] | Illegal type UInt8 of argument 2 of function h3kRing. | **Describe the unexpected behaviour**
When I use the h3kRing function, an illegal type error occurs.
In issue [34708](https://github.com/ClickHouse/ClickHouse/issues/34708), h3 functions had illegal parameter types. That problem was resolved in ClickHouse 22.3.2.1, but an error still occurs when I execute this query:
```sql
select AAA,BBB,CCC,DDD,h3kRing(DDD,1),EEE,FFF,GGG from core.dis_last;
```
```
2022.03.20 02:18:23.816553 [ 49963 ] {4d9fcd3e-f7c8-41e8-b0d2-1f321658437b} <Error> DynamicQueryHandler: Code: 43. DB::Exception: Illegal type UInt8 of argument 2 of function h3kRing. Must be UInt16: While processing AAA,BBB,CCC,DDD,h3kRing(DDD,1),EEE,FFF,GGG. (ILLEGAL_TYPE_OF_ARGUMENT), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa4dde1a in /usr/bin/clickhouse
1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&, int&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) @ 0x10dd9f61 in /usr/bin/clickhouse
2. ? @ 0x10de521c in /usr/bin/clickhouse
3. DB::IFunction::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0xc9bd641 in /usr/bin/clickhouse
4. DB::IFunctionOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x139ce039 in /usr/bin/clickhouse
5. DB::IFunctionOverloadResolver::build(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x139cecd6 in /usr/bin/clickhouse
6. DB::ActionsDAG::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x1403a385 in /usr/bin/clickhouse
7. DB::ScopeStack::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x141e4ec5 in /usr/bin/clickhouse
8. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x141eaa98 in /usr/bin/clickhouse
9. DB::ActionsMatcher::visit(DB::ASTExpressionList&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x141ed778 in /usr/bin/clickhouse
10. DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) @ 0x141bb337 in /usr/bin/clickhouse
11. DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ActionsDAG>&, bool) @ 0x141bb138 in /usr/bin/clickhouse
12. DB::SelectQueryExpressionAnalyzer::appendSelect(DB::ExpressionActionsChain&, bool) @ 0x141c7109 in /usr/bin/clickhouse
13. DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::__1::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) @ 0x141cbd2d in /usr/bin/clickhouse
14. DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0x144d648d in /usr/bin/clickhouse
15. ? @ 0x144ce2a3 in /usr/bin/clickhouse
16. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::unordered_map<DB::PreparedSetKey, std::__1::shared_ptr<DB::Set>, DB::PreparedSetKey::Hash, std::__1::equal_to<DB::PreparedSetKey>, std::__1::allocator<std::__1::pair<DB::PreparedSetKey const, std::__1::shared_ptr<DB::Set> > > >) @ 0x144c8caa in /usr/bin/clickhouse
17. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x144c7334 in /usr/bin/clickhouse
18. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x1468e4ca in /usr/bin/clickhouse
19. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x1468c910 in /usr/bin/clickhouse
20. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x1446da50 in /usr/bin/clickhouse
21. ? @ 0x148d096c in /usr/bin/clickhouse
22. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x148d3fca in /usr/bin/clickhouse
23. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x151cdae7 in /usr/bin/clickhouse
24. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x151d2292 in /usr/bin/clickhouse
25. DB::HTTPServerConnection::run() @ 0x1545979b in /usr/bin/clickhouse
26. Poco::Net::TCPServerConnection::start() @ 0x164b264f in /usr/bin/clickhouse
27. Poco::Net::TCPServerDispatcher::run() @ 0x164b4aa1 in /usr/bin/clickhouse
28. Poco::PooledThread::run() @ 0x16671e49 in /usr/bin/clickhouse
29. Poco::ThreadImpl::runnableEntry(void*) @ 0x1666f1a0 in /usr/bin/clickhouse
30. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
31. __clone @ 0xfeb0d in /usr/lib64/libc-2.17.so
(version 22.3.2.1)
```
I can bypass the error using:
```sql
select AAA,BBB,CCC,DDD,h3kRing(DDD,toUInt16(1)),EEE,FFF,GGG from core.dis_last;
```
**How to reproduce**
ClickHouse 22.3.2.1 Revision 54455.
core.dis_last DDL:
```sql
create table core.dis_last (
AAA DateTime,
BBB UInt16,
CCC String,
DDD UInt64,
EEE String,
FFF String,
GGG String
)
```
**Expected behavior**
No need to add toUInt16() on the k parameter of function `h3kRing(h3index, k)`.
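The failure mode looks like overly strict argument-type dispatch: the literal 1 is typed UInt8, and the function demands exactly UInt16 instead of accepting any integer that safely widens. A toy Python sketch of the difference (the class and function names are illustrative only):

```python
class UInt8(int): pass
class UInt16(int): pass

def h3_k_ring_strict(h3index, k):
    # 22.3 behaviour: argument 2 must be exactly UInt16
    if type(k) is not UInt16:
        raise TypeError(f"Illegal type {type(k).__name__} of argument 2; must be UInt16")
    return (h3index, int(k))

def h3_k_ring_lenient(h3index, k):
    # expected behaviour: accept anything that fits into UInt16
    if not 0 <= int(k) < 2 ** 16:
        raise ValueError("k does not fit into UInt16")
    return (h3index, int(k))

k = UInt8(1)  # how the literal 1 is inferred
try:
    h3_k_ring_strict(0x8F2, k)
except TypeError as e:
    print(e)  # Illegal type UInt8 of argument 2; must be UInt16
print(h3_k_ring_lenient(0x8F2, k) == (0x8F2, 1))  # True
```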
| https://github.com/ClickHouse/ClickHouse/issues/35432 | https://github.com/ClickHouse/ClickHouse/pull/37189 | fe2aa1861f38d754c9f4671ced85a423bdd1acdd | 3f18d7da33aa49e933667ba45ca8076df3cd7893 | "2022-03-19T18:27:28Z" | c++ | "2022-05-13T20:53:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,407 | ["src/Storages/MergeTree/IMergeTreeDataPart.cpp", "src/Storages/MergeTree/IMergeTreeDataPart.h", "src/Storages/MergeTree/MergeTreeBlockReadUtils.cpp", "src/Storages/MergeTree/MergeTreeBlockReadUtils.h", "src/Storages/MergeTree/MergeTreeSequentialSource.cpp", "src/Storages/StorageSnapshot.cpp", "src/Storages/StorageSnapshot.h", "tests/queries/0_stateless/01825_type_json_missed_values.reference", "tests/queries/0_stateless/01825_type_json_missed_values.sql"] | MergeTreeThread - Can't adjust last granule with Object('JSON') datatype | **Describe what's wrong**
When inserting data into a table with the `Object('JSON')` datatype, querying it may raise an error from MergeTreeThread.
After calling `OPTIMIZE TABLE test_json FINAL` the issue goes away.
**Does it reproduce on recent release?**
It reproduces on `22.3.2 revision 54455.`
**How to reproduce**
Create a table with an `Object('JSON')` datatype:
```
create table production.test_json (data Object('JSON')) ENGINE = MergeTree() ORDER BY tuple();
```
and insert some data into it:
```
insert into test_json SELECT replace(toString(m), '\'', '"') as data FROM (SELECT properties as m from production.events_v4 where notEmpty(properties) limit 100000000)
```
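The `replace` in the insert above is a quick conversion from the text form of a Map to JSON (naïve: it breaks if values themselves contain quotes); in Python terms, with hypothetical sample values:

```python
import json

# hypothetical Map value as its text representation
m_text = "{'reason': 'timeout', 'code': '408'}"

data = m_text.replace("'", '"')     # same trick as replace(toString(m), '\'', '"')
print(data)                         # {"reason": "timeout", "code": "408"}
print(json.loads(data)["reason"])   # timeout
```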
When you describe the table, you can see it has `data.reason`:
```
│ data.reason │ String │ │ │ │ │ │ 1 │
```
However, querying for it raises an error:
```
ec2.internal :) select data.reason from test_json where data.reason <> '' limit 10;
SELECT data.reason
FROM test_json
WHERE data.reason != ''
LIMIT 10
Query id: 91e682c1-2650-4f3b-99d8-5eafdd0e2839
[ip-172-31-90-127] 2022.03.18 13:29:50.471519 [ 311286 ] {91e682c1-2650-4f3b-99d8-5eafdd0e2839} <Debug> executeQuery: (from 127.0.0.1:60630) select data.reason from test_json where data.reason <> '' limit 10;
[ip-172-31-90-127] 2022.03.18 13:29:50.471953 [ 311286 ] {91e682c1-2650-4f3b-99d8-5eafdd0e2839} <Trace> ContextAccess (default): Access granted: SELECT(`data.reason`) ON production.test_json
[ip-172-31-90-127] 2022.03.18 13:29:50.472015 [ 311286 ] {91e682c1-2650-4f3b-99d8-5eafdd0e2839} <Trace> ContextAccess (default): Access granted: SELECT(`data.reason`) ON production.test_json
[ip-172-31-90-127] 2022.03.18 13:29:50.472041 [ 311286 ] {91e682c1-2650-4f3b-99d8-5eafdd0e2839} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
[ip-172-31-90-127] 2022.03.18 13:29:50.472134 [ 311286 ] {91e682c1-2650-4f3b-99d8-5eafdd0e2839} <Debug> production.test_json (463749c6-4417-490e-939b-66e00b80baa2) (SelectExecutor): Key condition: unknown
[ip-172-31-90-127] 2022.03.18 13:29:50.472401 [ 311286 ] {91e682c1-2650-4f3b-99d8-5eafdd0e2839} <Debug> production.test_json (463749c6-4417-490e-939b-66e00b80baa2) (SelectExecutor): Selected 8/8 parts by partition key, 8 parts by primary key, 12453/12453 marks by primary key, 12453 marks to read from 8 ranges
[ip-172-31-90-127] 2022.03.18 13:29:50.472457 [ 311286 ] {91e682c1-2650-4f3b-99d8-5eafdd0e2839} <Debug> production.test_json (463749c6-4417-490e-939b-66e00b80baa2) (SelectExecutor): Reading approx. 102000000 rows with 2 streams
[ip-172-31-90-127] 2022.03.18 13:29:50.489145 [ 311286 ] {91e682c1-2650-4f3b-99d8-5eafdd0e2839} <Error> executeQuery: Code: 49. DB::Exception: Can't adjust last granule because it has 8161 rows, but try to subtract 65505 rows.: While executing MergeTreeThread. (LOGICAL_ERROR) (version 22.3.2.1) (from 127.0.0.1:60630) (in query: select data.reason from test_json where data.reason <> '' limit 10;), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa4dde1a in /usr/bin/clickhouse
1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) @ 0xbeaf7d9 in /usr/bin/clickhouse
2. DB::MergeTreeRangeReader::ReadResult::adjustLastGranule() @ 0x1577c250 in /usr/bin/clickhouse
3. DB::MergeTreeRangeReader::startReadingChain(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0x15780044 in /usr/bin/clickhouse
4. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0x1577e815 in /usr/bin/clickhouse
5. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0x15774028 in /usr/bin/clickhouse
6. DB::MergeTreeBaseSelectProcessor::readFromPart() @ 0x1577512d in /usr/bin/clickhouse
7. DB::MergeTreeBaseSelectProcessor::generate() @ 0x157738a0 in /usr/bin/clickhouse
8. DB::ISource::tryGenerate() @ 0x1548cdf5 in /usr/bin/clickhouse
9. DB::ISource::work() @ 0x1548c9ba in /usr/bin/clickhouse
10. DB::SourceWithProgress::work() @ 0x156e4282 in /usr/bin/clickhouse
11. DB::ExecutionThreadContext::executeTask() @ 0x154ad143 in /usr/bin/clickhouse
12. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x154a0b9e in /usr/bin/clickhouse
13. ? @ 0x154a2504 in /usr/bin/clickhouse
14. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa584c97 in /usr/bin/clickhouse
15. ? @ 0xa58881d in /usr/bin/clickhouse
16. ? @ 0x7f421c9c6609 in ?
17. __clone @ 0x7f421c8eb163 in ?
0 rows in set. Elapsed: 0.018 sec.
Received exception from server (version 22.3.2):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Can't adjust last granule because it has 8161 rows, but try to subtract 65505 rows.: While executing MergeTreeThread. (LOGICAL_ERROR)
```
**Expected behavior**
No exception raised. | https://github.com/ClickHouse/ClickHouse/issues/35407 | https://github.com/ClickHouse/ClickHouse/pull/35687 | e3d772abd09eff38827f4d05df257240e945dd33 | 1cba31c3059cd1814795eb3502df5fab90780c06 | "2022-03-18T13:50:43Z" | c++ | "2022-03-29T22:21:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,359 | ["src/AggregateFunctions/AggregateFunctionMap.h", "tests/queries/0_stateless/02351_Map_combinator_dist.reference", "tests/queries/0_stateless/02351_Map_combinator_dist.sql"] | sumMap values return strangely huge number when query on distributed table with long data range | Hi, I'm trying to use the sumMap function to do a 'word count'-style computation, but I've found something strange.
The data field format looks like below; `switch_pcpBuckets1Min Array(Int32)` represents which minutes a user was active during this hour.
```
SELECT
interval_startTimeMs,
switch_pcpBuckets1Min
FROM test_all
LIMIT 10
Query id: 8aa42523-a6aa-4d1f-8f52-6bf62d3ed1a7
βββββinterval_startTimeMsββ¬βswitch_pcpBuckets1Minββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β 2022-03-12 02:00:00.000 β [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20] β
β 2022-03-12 02:00:00.000 β [] β
β 2022-03-12 02:00:00.000 β [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59] β
β 2022-03-12 02:00:00.000 β [] β
β 2022-03-12 02:00:00.000 β [] β
β 2022-03-12 02:00:00.000 β [31,32,33,34,35,36,37,38,39,40] β
β 2022-03-12 02:00:00.000 β [1,2,3,4,5,6,7,8] β
β 2022-03-12 02:00:00.000 β [0,1] β
β 2022-03-12 02:00:00.000 β [24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59] β
β 2022-03-12 02:00:00.000 β [49,50,51,52,53,54,55,56,57,58,59] β
ββββββββββββββ
```
I have 3 ClickHouse nodes with a local table on each, and one distributed table based on them. The data distribution in the cluster is like below; each node stores 8 hours of data every day:
```
ck1: day1.00,day1.03,day1.06,day1.09....day1.18,day1.21,day2.00,day2.03,....day2.18,day2.21.......
ck2: day1.01,day1.04,day1.07,day1.10....day1.19,day1.22,day2.01,day2.04,....day2.18,day2.22.......
ck3: day1.02,day1.05,day1.08,day1.11....day1.20,day1.23,day2.02,day2.05,....day2.18,day2.23.......
```
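For context, the aggregation below simply counts, per hour, how many rows were active in each minute, so no value in the result map can ever exceed the number of aggregated rows. A minimal Python sketch of those semantics (an illustration, not ClickHouse's implementation):

```python
from collections import Counter

def sum_map(rows):
    # arrayMap(x -> (x, 1), minutes) summed per key, like sumMap over Map(UInt8, Int64)
    total = Counter()
    for minutes in rows:
        for m in minutes:
            total[m] += 1
    return dict(total)

rows = [[0, 1, 2], [1, 2, 3], []]   # hypothetical per-user active minutes
print(sum_map(rows))                # {0: 1, 1: 2, 2: 2, 3: 1}
print(max(sum_map(rows).values()) <= len(rows))  # True: values are bounded by the row count
```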
When I apply the `sumMap(cast(arrayMap(x -> (x,1), switch_pcpBuckets1Min), 'Map(UInt8,Int64)'))` function on the distributed table, I get some strange values:
```
SELECT
interval_startTimeMs,
sumMap(CAST(arrayMap(x -> (x, 1), switch_pcpBuckets1Min), 'Map(UInt8,Int64)'))
FROM test_all
WHERE notEmpty(switch_pcpBuckets1Min) AND (interval_startTimeMs >= '2022-03-13 15:00:00.000') AND (interval_startTimeMs < '2022-03-15 11:00:00.000')
GROUP BY interval_startTimeMs
ORDER BY interval_startTimeMs ASC
```
Result (screenshot; the 2022-03-14 15:00:00.000 row shows abnormally huge values):

or
```
........
β 2022-03-14 13:00:00.000 β {0:1888498,1:1806408,2:1738233,3:1714562,4:1716166,5:1718846,6:1721077,7:1723416,8:1726652,9:1729414,10:1732370,11:1735659,12:1739089,13:1743080,14:1746563,15:1750973,16:1752769,17:1756472,18:1759138,19:1762198,20:1769251,21:1767879,22:1770599,23:1775166,24:1776618,25:1779658,26:1782262,27:1785372,28:1788181,29:1791996,30:1806137,31:1828201,32:1799067,33:1798101,34:1799876,35:1803645,36:1806954,37:1809457,38:1812673,39:1814831,40:1818984,41:1821903,42:1824868,43:1826708,44:1829861,45:1832978,46:1836875,47:1835713,48:1837610,49:1838824,50:1840808,51:1843022,52:1846121,53:1849784,54:1851913,55:1854833,56:1857469,57:1859817,58:1859672,59:1857749} β
β 2022-03-14 14:00:00.000 β {0:1973499,1:2038673,2:1889800,3:1866611,4:1864982,5:1868397,6:1870355,7:1873609,8:1878854,9:1878353,10:1884231,11:1886529,12:1887418,13:1890487,14:1894295,15:1896781,16:1899235,17:1901512,18:1906641,19:1907196,20:1910120,21:1912812,22:1915795,23:1917535,24:1921423,25:1924899,26:1927412,27:1929536,28:1931833,29:1933543,30:1969748,31:1949704,32:1937275,33:1937204,34:1939046,35:1943434,36:1948582,37:1951652,38:1951330,39:1953289,40:1956900,41:1958959,42:1960718,43:1963856,44:1966952,45:1970839,46:1969867,47:1972716,48:1974749,49:1974589,50:1977219,51:1979250,52:1981908,53:1983727,54:1984699,55:1986857,56:1988738,57:1990400,58:1990058,59:1990354} β
β 2022-03-14 15:00:00.000 β {0:1374179596971150604,1:216736832572504376,2:3399704436437297448,3:1663540288323457296,4:4267786510494217524,5:2531622362380377372,6:795458214266537220,7:3399704436437297448,8:4267786510494217524,9:1663540288323457296,10:3110343745084990756,11:1952900979675763988,12:2242261671028070680,13:1663540288323457296,14:1084818905618843912,15:506097522914230528,16:4267786510494217524,17:3689065127789604140,18:3110343745084990756,19:2531622362380377372,20:1952900979675763988,21:1374179596971150604,22:795458214266537220,23:795458214266537220,24:216736832572504376,25:3978425819141910832,26:506097522914230528,27:3110343745084990756,28:1374179596971150604,29:3978425819141910832,30:2242261671028070680,31:506097522914230528,32:216736832572504376,33:3978425819141910832,34:3399704436437297448,35:2820983053732684064,36:2242261671028070680,37:2242261671028070680,38:1663540288323457296,39:1084818905618843912,40:506097522914230528,41:4267786510494217524,42:3689065127789604140,43:3110343745084990756,44:3399704436437297448,45:2820983053732684064,46:2531622362380377372,47:1952900979675763988,48:795458214266537220,49:3978425819141910832,50:1084818905618843912,51:3689065127789604140,52:1952900979675763988,53:216736832572504376,54:2820983053732684064,55:3689065127789604140,56:1084818905618843912,57:2531622362380377372,58:1374179596971150604,59:2820983053732684064} β
β 2022-03-14 16:00:00.000 β {0:2262290,1:2349170,2:2174879,3:2134685,4:2131627,5:2136194,6:2138389,7:2142454,8:2144920,9:2148797,10:2156529,11:2159239,12:2164067,13:2168475,14:2172825,15:2176691,16:2180780,17:2183832,18:2190913,19:2191753,20:2197542,21:2200060,22:2202295,23:2206060,24:2210022,25:2213551,26:2216838,27:2219448,28:2223532,29:2225276,30:2247946,31:2290295,32:2230129,33:2229426,34:2230748,35:2235986,36:2237546,37:2240905,38:2244287,39:2246736,40:2249971,41:2252959,42:2257151,43:2261020,44:2264851,45:2269398,46:2267360,47:2273254,48:2271068,49:2273006,50:2273937,51:2275677,52:2278537,53:2281759,54:2285372,55:2290137,56:2288032,57:2289823,58:2290632,59:2281998} β
β 2022-03-14 17:00:00.000 β {0:2342135,1:2470707,2:2358015,3:2286061,4:2282772,5:2282175,6:2286603,7:2289242,8:2292591,9:2294802,10:2299370,11:2302519,12:2305614,13:2311185,14:2314095,15:2321138,16:2322415,17:2324769,18:2328666,19:2331253,20:2341946,21:2338189,22:2341113,23:2344412,24:2347824,25:2351667,26:2354843,27:2355172,28:2358307,29:2357599,30:2389072,31:2397644,32:2362451,33:2357039,34:2357853,35:2361892,36:2364202,37:2366512,38:2368998,39:2371447,40:2373504,41:2372957,42:2380379,43:2380959,44:2381836,45:2384311,46:2384152,47:2384083,48:2384127,49:2384485,50:2387646,51:2387461,52:2386520,53:2390678,54:2391310,55:2395460,56:2391889,57:2391053,58:2388713,59:2378835}
..........
```
As you can see from the screenshot, the values for the hour 2022-03-14 15:00:00.000 are strangely huge, like 4267786510494217524 and 3689065127789604140. But when I narrow the search range down from 2 days to 3 hours:
```
SELECT
interval_startTimeMs,
sumMap(CAST(arrayMap(x -> (x, 1), switch_pcpBuckets1Min), 'Map(UInt8,Int64)'))
FROM test_all
WHERE notEmpty(switch_pcpBuckets1Min) AND (interval_startTimeMs >= '2022-03-14 13:00:00.000') AND (interval_startTimeMs < '2022-03-14 16:00:00.000')
GROUP BY interval_startTimeMs
ORDER BY interval_startTimeMs ASC
Query id: f0619e61-a3bd-410c-b363-c827250e9f75
βββββinterval_startTimeMsββ¬βsumMap(CAST(arrayMap(lambda(tuple(x), tuple(x, 1)), switch_pcpBuckets1Min), 'Map(UInt8,Int64)'))ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β 2022-03-14 13:00:00.000 β {0:1888498,1:1806408,2:1738233,3:1714562,4:1716166,5:1718846,6:1721077,7:1723416,8:1726652,9:1729414,10:1732370,11:1735659,12:1739089,13:1743080,14:1746563,15:1750973,16:1752769,17:1756472,18:1759138,19:1762198,20:1769251,21:1767879,22:1770599,23:1775166,24:1776618,25:1779658,26:1782262,27:1785372,28:1788181,29:1791996,30:1806137,31:1828201,32:1799067,33:1798101,34:1799876,35:1803645,36:1806954,37:1809457,38:1812673,39:1814831,40:1818984,41:1821903,42:1824868,43:1826708,44:1829861,45:1832978,46:1836875,47:1835713,48:1837610,49:1838824,50:1840808,51:1843022,52:1846121,53:1849784,54:1851913,55:1854833,56:1857469,57:1859817,58:1859672,59:1857749} β
β 2022-03-14 14:00:00.000 β {0:1973499,1:2038673,2:1889800,3:1866611,4:1864982,5:1868397,6:1870355,7:1873609,8:1878854,9:1878353,10:1884231,11:1886529,12:1887418,13:1890487,14:1894295,15:1896781,16:1899235,17:1901512,18:1906641,19:1907196,20:1910120,21:1912812,22:1915795,23:1917535,24:1921423,25:1924899,26:1927412,27:1929536,28:1931833,29:1933543,30:1969748,31:1949704,32:1937275,33:1937204,34:1939046,35:1943434,36:1948582,37:1951652,38:1951330,39:1953289,40:1956900,41:1958959,42:1960718,43:1963856,44:1966952,45:1970839,46:1969867,47:1972716,48:1974749,49:1974589,50:1977219,51:1979250,52:1981908,53:1983727,54:1984699,55:1986857,56:1988738,57:1990400,58:1990058,59:1990354} β
β 2022-03-14 15:00:00.000 β {0:2143412,1:2148294,2:2015013,3:1992616,4:1989841,5:1993092,6:1995793,7:1995933,8:1998535,9:2000326,10:2002517,11:2006287,12:2009980,13:2013617,14:2016581,15:2022306,16:2021699,17:2025477,18:2027663,19:2029937,20:2032646,21:2035411,22:2037660,23:2040148,24:2042953,25:2045425,26:2050923,27:2051131,28:2053060,29:2053878,30:2077570,31:2079524,32:2061122,33:2060250,34:2065343,35:2068150,36:2068601,37:2071234,38:2073857,39:2076418,40:2080423,41:2082912,42:2085963,43:2092586,44:2092529,45:2096102,46:2096640,47:2098708,48:2100604,49:2102188,50:2105055,51:2106478,52:2109066,53:2113995,54:2120969,55:2121535,56:2124629,57:2126631,58:2129707,59:2124955} β
βββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
3 rows in set. Elapsed: 0.257 sec. Processed 31.75 million rows, 1.90 GB (123.41 million rows/s., 7.38 GB/s.)
```
The result for hour 2022-03-14 15:00:00.000 changes back to the right value.
The CREATE TABLE statement looks like:
```
CREATE TABLE test ()
...
...
interval_startTimeMs DateTime64,
switch_pcpBuckets1Min Array(Int32),
...
) ENGINE = MergeTree
PARTITION BY toYYYYMMDD(interval_startTimeMs)
ORDER BY (interval_startTimeMs....
```
ClickHouse version:
```
SELECT version()

Query id: f03f4927-6719-4a92-829a-4732b0657a84

┌─version()─┐
│ 22.2.2.1  │
└───────────┘
```
For now I think this is a bug in sumMap, but I'm not sure if I'm right.
Thanks.
| https://github.com/ClickHouse/ClickHouse/issues/35359 | https://github.com/ClickHouse/ClickHouse/pull/38748 | 2577b59f4c94e87ed57dce9706d53f15f36b2f29 | 1ee752b9a599dffea4c5d3d20aa7e1f90c9ae381 | "2022-03-17T11:46:26Z" | c++ | "2022-07-03T16:32:17Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,283 | ["contrib/arrow-cmake/CMakeLists.txt"] | ArrowStream inputData compression over gRPC broken in v22.2.2.1+ | Using compression with the ArrowStream format via the InputData field of the gRPC interface has stopped working in v22.2.2.1 and v22.2.3.5.
**Does it reproduce on recent release?**
yes
**Enable crash reporting**
n/a
**How to reproduce**
* using docker container artifacts `docker.io/clickhouse/clickhouse:22.2.2.1` and friends
* using gRPC
* ddl like: `CREATE TABLE IF NOT EXISTS mytab (id Int64, name String) ENGINE=Null`
* repro:
```go
package main
import (
"bytes"
"context"
"crypto/tls"
"fmt"
"github.com/apache/arrow/go/arrow"
"github.com/apache/arrow/go/arrow/array"
"github.com/apache/arrow/go/arrow/ipc"
"github.com/apache/arrow/go/arrow/memory"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
// this is the import for the transpiled grpc protobufs.
ch "myrepo/clickhouse-grpc"
)
const (
username = "root"
password = "password"
db = "default"
address = "127.0.0.1:9100"
)
func main() {
conn, err := grpc.Dial(address, grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{InsecureSkipVerify: true})))
if err != nil {
panic(err)
}
var (
schema = arrow.NewSchema(
[]arrow.Field{
{Name: "id", Type: arrow.PrimitiveTypes.Int64},
{Name: "name", Type: &arrow.StringType{}},
},
nil,
)
memalloc = memory.NewGoAllocator()
buf = &bytes.Buffer{}
b = array.NewRecordBuilder(memalloc, schema)
w = ipc.NewWriter(
buf,
ipc.WithAllocator(memalloc),
ipc.WithSchema(schema),
ipc.WithCompressConcurrency(10),
ipc.WithZstd(), // Note here the ZSTD compression
)
)
for i := int64(0); i < 100; i++ {
b.Field(0).(*array.Int64Builder).Append(i)
b.Field(1).(*array.StringBuilder).Append(fmt.Sprintf("row for %d", i))
}
rec := b.NewRecord()
w.Write(rec)
rec.Release()
if res, err := ch.NewClickHouseClient(conn).ExecuteQuery(context.Background(), &ch.QueryInfo{
InputData: buf.Bytes(),
UserName: username,
Password: password,
Database: db,
Query: "INSERT INTO mytab FORMAT ArrowStream",
}); err != nil {
panic(err)
} else if res.Exception != nil {
panic(res.Exception.String())
}
}
```
**Expected behavior**
Query should succeed. This was the case pre v22.2.2.1
**Error message and/or stacktrace**
Server responds with error, stacktrace:
```
2022.03.14 21:35:59.249424 [ 448 ] {8342226a-d909-48a6-be40-069dfad9097b} <Error> GRPCServer: Code: 33. DB::ParsingException: Error while reading batch of Arrow data: NotImplemented: Support for codec 'zstd' not built: While executing ArrowBlockInputFormat. (CANNOT_READ_ALL_DATA), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xaebed1a in /usr/bin/clickhouse
1. DB::ParsingException::ParsingException<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) @ 0x14454764 in /usr/bin/clickhouse
2. DB::ArrowBlockInputFormat::generate() @ 0x15a71e47 in /usr/bin/clickhouse
3. DB::ISource::tryGenerate() @ 0x15a40dd5 in /usr/bin/clickhouse
4. DB::ISource::work() @ 0x15a4099a in /usr/bin/clickhouse
5. DB::ExecutionThreadContext::executeTask() @ 0x15a60ca3 in /usr/bin/clickhouse
6. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x15a54b7e in /usr/bin/clickhouse
7. DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) @ 0x15a54280 in /usr/bin/clickhouse
8. DB::PullingPipelineExecutor::pull(DB::Chunk&) @ 0x15a6542c in /usr/bin/clickhouse
9. DB::PullingPipelineExecutor::pull(DB::Block&) @ 0x15a656ec in /usr/bin/clickhouse
10. ? @ 0x1577c814 in /usr/bin/clickhouse
11. ? @ 0x15778e09 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xaf62837 in /usr/bin/clickhouse
13. ? @ 0xaf662fd in /usr/bin/clickhouse
14. ? @ 0x7fbf05768609 in ?
15. clone @ 0x7fbf0568f293 in ?
```
**Additional context**
In the new gRPC protobuf changes I see there were changes to how compression is denoted, but when I try to set `InputCompressionType: "zstd"` in QueryInfo, it still fails - albeit slightly differently:
```
2022.03.14 21:45:17.712642 [ 448 ] {f43d253e-85f6-4674-9e84-3f039d113020} <Error> GRPCServer: Code: 561. DB::Exception: Zstd stream encoding failed: error 'Unknown frame descriptor'; zstd version: 1.5.0: While executing ArrowBlockInputFormat. (ZSTD_DECODER_FAILED), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xaebed1a in /usr/bin/clickhouse
1. DB::Exception::Exception<char const*, char const (&) [6]>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, char const*&&, char const (&) [6]) @ 0x11cb6517 in /usr/bin/clickhouse
2. DB::ZstdInflatingReadBuffer::nextImpl() @ 0x11cb6444 in /usr/bin/clickhouse
3. DB::ReadBuffer::readBig(char*, unsigned long) @ 0xaed4309 in /usr/bin/clickhouse
4. DB::ArrowInputStreamFromReadBuffer::Read(long, void*) @ 0x15a772d1 in /usr/bin/clickhouse
5. arrow::ipc::DecodeMessage(arrow::ipc::MessageDecoder*, arrow::io::InputStream*) @ 0x19dd0f01 in /usr/bin/clickhouse
6. arrow::ipc::InputStreamMessageReader::ReadNextMessage() @ 0x19dd635f in /usr/bin/clickhouse
7. arrow::ipc::RecordBatchStreamReaderImpl::Open(std::__1::unique_ptr<arrow::ipc::MessageReader, std::__1::default_delete<arrow::ipc::MessageReader> >, arrow::ipc::IpcReadOptions const&) @ 0x19d813e1 in /usr/bin/clickhouse
8. arrow::ipc::RecordBatchStreamReader::Open(std::__1::unique_ptr<arrow::ipc::MessageReader, std::__1::default_delete<arrow::ipc::MessageReader> >, arrow::ipc::IpcReadOptions const&) @ 0x19d8119d in /usr/bin/clickhouse
9. ? @ 0x15a72c62 in /usr/bin/clickhouse
10. DB::ArrowBlockInputFormat::prepareReader() @ 0x15a72108 in /usr/bin/clickhouse
11. DB::ArrowBlockInputFormat::generate() @ 0x15a7180e in /usr/bin/clickhouse
12. DB::ISource::tryGenerate() @ 0x15a40dd5 in /usr/bin/clickhouse
13. DB::ISource::work() @ 0x15a4099a in /usr/bin/clickhouse
14. DB::ExecutionThreadContext::executeTask() @ 0x15a60ca3 in /usr/bin/clickhouse
15. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x15a54b7e in /usr/bin/clickhouse
16. DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) @ 0x15a54280 in /usr/bin/clickhouse
17. DB::PullingPipelineExecutor::pull(DB::Chunk&) @ 0x15a6542c in /usr/bin/clickhouse
18. DB::PullingPipelineExecutor::pull(DB::Block&) @ 0x15a656ec in /usr/bin/clickhouse
19. ? @ 0x1577c814 in /usr/bin/clickhouse
20. ? @ 0x15778e09 in /usr/bin/clickhouse
21. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xaf62837 in /usr/bin/clickhouse
22. ? @ 0xaf662fd in /usr/bin/clickhouse
23. ? @ 0x7fbf05768609 in ?
24. clone @ 0x7fbf0568f293 in ?
```
Note this is not only with Zstd, but also behaves very similarly with LZ4. | https://github.com/ClickHouse/ClickHouse/issues/35283 | https://github.com/ClickHouse/ClickHouse/pull/35486 | d0bb66604c0173d8352a8d951a8179510b891008 | fe5afa6b3759affe6ce4a317e36b5c84527dba09 | "2022-03-14T21:55:39Z" | c++ | "2022-05-09T11:41:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,238 | ["docs/en/sql-reference/functions/array-functions.md", "src/Functions/array/arrayFirstLast.cpp", "tests/queries/0_stateless/02241_array_first_last_or_null.reference", "tests/queries/0_stateless/02241_array_first_last_or_null.sql"] | ArrayFirstOrDefault function | For example, we have a table like this:

and i run query like this
```
select arrayFirst((i, v)->(v = 'a'), ids, values) as a_id,
arrayFirst((i, v)->(v = 'c'), ids, values) as c_id
```

This result is not right for me; I need to write something like
```
select if(arrayCount((i, v)->(v = 'a'), ids, values) > 0, arrayFirst((i, v)->(v = 'a'), ids, values), null) as a_id,
if(arrayCount((i, v)->(v = 'c'), ids, values) > 0, arrayFirst((i, v)->(v = 'c'), ids, values), null) as c_id
```
and then I get the right result:

So, can we get a function like `arrayFirstOrDefault` or `arrayFirstOrNull`?
```
select arrayFirstOrNull((i, v)->(v = 'a'), ids, values) as a_id,
arrayFirstOrNull((i, v)->(v = 'c'), ids, values) as c_id
```
| https://github.com/ClickHouse/ClickHouse/issues/35238 | https://github.com/ClickHouse/ClickHouse/pull/35414 | 021f6b8c21872142a51956ee6e73f94be8f68eee | 9a4686adac98ab97fab3e2cda052cbff16e108c3 | "2022-03-12T10:01:06Z" | c++ | "2022-03-18T23:03:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,192 | ["src/Storages/MergeTree/checkDataPart.cpp", "tests/queries/0_stateless/02235_check_table_sparse_serialization.reference", "tests/queries/0_stateless/02235_check_table_sparse_serialization.sql"] | There is no column a in serialization infos | cc @CurtizJ
STR:
```sql
create table test (a UInt8) ENGINE = MergeTree Order By (a) SETTINGS ratio_of_defaults_for_sparse_serialization = 0.9
insert into test values ('1')
SET check_query_single_value_result = 0;
check table test
```
the check table command output is
```
There is no column a in serialization infos
```
Version: `22.2.3.1`
(reproducible on master too - just downloaded macos build `22.3.1.1`)
| https://github.com/ClickHouse/ClickHouse/issues/35192 | https://github.com/ClickHouse/ClickHouse/pull/35274 | 4e2ec66b92b5338d00d1ef92c05d4c1dc103a5a6 | fbb1ebd9b87306f91adbc2163c8987107e51a8f9 | "2022-03-10T17:43:12Z" | c++ | "2022-03-14T20:56:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,188 | ["programs/server/play.html"] | Play UI Could Be More Clear About Showing it is in the Middle of a Long Running Query | **Describe the issue**
Context when I noticed it: I was following the bootcamp and got to the insert data step https://clickhouse.com/learn/lessons/bootcamp/lab/#3-insert-data
Here's a screenshot of what it looked like while my long query ran
<img width="1535" alt="Screen Shot 2022-03-10 at 11 48 32 AM" src="https://user-images.githubusercontent.com/31216945/157714297-292f1e42-2d8b-4b29-90f5-bbb257c98dfe.png">
When the Play UI is issuing a long-running query, it isn't very clear that that's happening. The only reaction the UI has to a long-running query is to show a subtle, non-animated status indicator as far as I can tell
**Expected behavior**
I don't think it's my place to prescribe how to change the UI, but there are a few strategies I've seen in other UIs that could be applied here.
1) The hour glass indicator icon could be animated to draw the user's attention and the motion gives users comfort that things are working and not crashed.
2) The `Run` button could be disabled so that the user doesn't try to send another query before the first one finishes.
3) The results pane could clear out when the new query is sent to show it's waiting for new results to show. Results pane could also be set to show a message like `Query running...` | https://github.com/ClickHouse/ClickHouse/issues/35188 | https://github.com/ClickHouse/ClickHouse/pull/35860 | 97f29ac0fea4c7e414f59123f6ebd7c766b70474 | a7c6d78ee1bc56a72a84f32dfcec0ee9a469533c | "2022-03-10T17:07:01Z" | c++ | "2022-04-01T23:54:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,156 | ["src/Functions/CastOverloadResolver.h", "tests/queries/0_stateless/02316_cast_to_ip_address_default_column.reference", "tests/queries/0_stateless/02316_cast_to_ip_address_default_column.sql"] | CAST to IPv6 on empty strings changed between 21.x and 22.x | **Describe the issue**
On empty strings, the behaviour of the function `CAST('', 'IPv6')` changed between the versions 21.x and 22.x
**How to reproduce**
On 21.8.14.5:
```
SELECT CAST('', 'IPv6');
Returns:
::
```
On 22.2.2.1:
```
SELECT CAST('', 'IPv6');
Triggers error:
Code: 441. DB::Exception: Invalid IPv6 value.: While processing CAST('', 'IPv6'). (CANNOT_PARSE_DOMAIN_VALUE_FROM_STRING) (version 22.2.2.1)
```
**Additional context**
This can prevent some users from upgrading to 22.x when the use of this function is spread across a number of table definitions and environments. The following workaround `CAST(toFixedString('', 16), 'IPv6')` works but the manual task of updating all the occurrences is challenging.
| https://github.com/ClickHouse/ClickHouse/issues/35156 | https://github.com/ClickHouse/ClickHouse/pull/37761 | 929ab8202483c62b1d831464d9fe74c79f772a19 | 2ac5f5bc60991bf8608d3f35754562dfdced62bf | "2022-03-09T14:55:44Z" | c++ | "2022-06-08T21:22:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,153 | ["src/Storages/Kafka/StorageKafka.cpp", "tests/integration/test_storage_kafka/test.py"] | kafka_num_consumers setting doesn't work without kafka_thread_per_consumer | I have a Kafka engine table, an MV, and a destination MT table.
Kafka table create query:
```
CREATE TABLE abc.Kafka_TestSource (...)
ENGINE = Kafka
SETTINGS kafka_broker_list = '...'
, format_avro_schema_registry_url = '...'
, kafka_topic_list = '...'
, kafka_group_name = 'Test_KafkaToCH'
, kafka_format = 'AvroConfluent'
, kafka_num_consumers = 1
, kafka_thread_per_consumer = 0
```
According to the Grafana consumer-group stats dashboard, it reads the topic at an average speed of ~11k records per second.
I recreate the Kafka table with `kafka_num_consumers = 9` (the Kafka topic has 9 partitions) and it still reads at the same speed of ~11k rps; **system.metrics** also shows that only 1 pool task is active:
```
SELECT
m.metric,
m.value
FROM system.metrics AS m
WHERE m.metric LIKE 'Background%'
Query id: 831a9285-e91b-45a4-8a1b-6b536d6860fe
ββmetricβββββββββββββββββββββββββββββββββββ¬βvalueββ
β BackgroundMergesAndMutationsPoolTask β 4 β
β BackgroundFetchesPoolTask β 0 β
β BackgroundCommonPoolTask β 0 β
β BackgroundMovePoolTask β 0 β
β BackgroundSchedulePoolTask β 0 β
β BackgroundBufferFlushSchedulePoolTask β 0 β
β BackgroundDistributedSchedulePoolTask β 0 β
β BackgroundMessageBrokerSchedulePoolTask β 1 β
βββββββββββββββββββββββββββββββββββββββββββ΄ββββββββ
```
If I set `kafka_thread_per_consumer = 1`, then **BackgroundMessageBrokerSchedulePoolTask** in **system.metrics** rises to 9 and the read speed in Grafana rises to ~100k records per second:

1. `kafka_num_consumers = 1, kafka_thread_per_consumer = 0`
2. `kafka_num_consumers = 9, kafka_thread_per_consumer = 0`
3. `kafka_num_consumers = 9, kafka_thread_per_consumer = 1`
Why doesn't `kafka_num_consumers` work without `kafka_thread_per_consumer`?
ClickHouse 21.11.4.14. | https://github.com/ClickHouse/ClickHouse/issues/35153 | https://github.com/ClickHouse/ClickHouse/pull/35973 | 2f11c5323e30f910fda56992cc5cd7a84a4165e1 | ac74757f923402d67afa6202a7d8b46b9d2eedce | "2022-03-09T12:32:19Z" | c++ | "2022-04-11T13:30:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,131 | ["src/Interpreters/OptimizeShardingKeyRewriteInVisitor.cpp", "tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.reference", "tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.sql", "tests/queries/0_stateless/01930_optimize_skip_unused_shards_rewrite_in.reference"] | optimize_skip_unused_shards + Int32 + negative ids from different shards | ```sql
CREATE TABLE local_test_int32 ( id String, id_hash Int32) ENGINE = MergeTree ORDER BY (id_hash, id);
create table test_int32 as local_test_int32
ENGINE = Distributed('test_cluster_two_shards' , currentDatabase(), local_test_int32, id_hash);
SELECT count()
FROM test_int32
PREWHERE id_hash IN ( -2146278902,-2147452843)
SETTINGS optimize_skip_unused_shards = 1;
Received exception from server (version 22.2.2):
Code: 92. DB::Exception: Received from localhost:9000.
DB::Exception: Cannot infer type of an empty tuple: While processing tuple(). (EMPTY_DATA_PASSED)
```
@azat
```sql
select arrayJoin([-2146278902,-2147452843]) % 2;
ββmodulo(arrayJoin([-2146278902, -2147452843]), 2)ββ
β 0 β
β -1 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββ
```
```sql
-- but it's OK if ids are positive
SELECT arrayJoin([2146278902, 2147452843]) % 2
ββmodulo(arrayJoin([2146278902, 2147452843]), 2)ββ
β 0 β
β 1 β
ββββββββββββββββββββββββββββββββββββββββββββββββββ
SELECT count()
FROM test_int32
PREWHERE id_hash IN ( 2146278902,2147452843)
SETTINGS optimize_skip_unused_shards = 1;
0 rows in set. Elapsed: 0.004 sec.
-- and it's OK if ids are negative and shard is the same
SELECT arrayJoin([-2146278903, -2147452843]) % 2
ββmodulo(arrayJoin([-2146278903, -2147452843]), 2)ββ
β -1 β
β -1 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββ
SELECT count()
FROM test_int32
PREWHERE id_hash IN ( -2146278903, -2147452843)
0 rows in set. Elapsed: 0.004 sec.
```
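The sign of the remainder is the crux here. Bash arithmetic uses the same C-style truncated division as signed integer math in C++, so it can illustrate why these two keys land in different remainder buckets (a reference sketch of the arithmetic only, not ClickHouse internals):

```shell
# C-style '%' truncates toward zero, so the remainder of a negative odd
# number is negative: the two keys map to buckets 0 and -1.
echo $(( -2146278902 % 2 ))   # 0
echo $(( -2147452843 % 2 ))   # -1
```

This matches the `arrayJoin(...) % 2` output above: mixing an even and an odd negative key produces two different remainders, while two odd negative keys both yield `-1` and stay on one shard.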
No issue with earlier versions (for example 20.8.17.25) | https://github.com/ClickHouse/ClickHouse/issues/35131 | https://github.com/ClickHouse/ClickHouse/pull/35134 | 15e4978a3f622164094d31ca55e314a81c6033f4 | 58e53b06a6875efc748fb0b25eed8f2a5b2a3841 | "2022-03-08T18:40:29Z" | c++ | "2022-03-10T20:12:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,128 | ["src/Common/mysqlxx/PoolWithFailover.cpp"] | MySQL errors are unclear if password / user are incorrect. | ```sql
apt-get install mysql-server
create database db;
create table db.test(a Int);
CREATE USER 'myuser'@'%' IDENTIFIED BY 'mypass';
GRANT ALL ON *.* TO 'myuser'@'%';
FLUSH PRIVILEGES;
```
The mysql client shows different error messages for a wrong password/user vs. a wrong host:port:
```
# mysql -h 127.0.0.1 -u myuser --password=mypass1
ERROR 1045 (28000): Access denied for user 'myuser'@'localhost' (using password: YES)
mysql -h 127.0.0.1 --port 9999 -u myuser --password=mypass
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1:9999' (111)
```
ClickHouse shows the same error message in both cases:
```
select * from mysql('127.0.0.1', 'db', 'test', 'myuser1', 'mypass');
DB::Exception: Exception: Connections to all replicas failed: db@127.0.0.1:3306 as user myuser1. (POCO_EXCEPTION)
select * from mysql('127.0.0.1:9999', 'db', 'test', 'myuser', 'mypass');
DB::Exception: Exception: Connections to all replicas failed: db@127.0.0.1:9999 as user myuser. (POCO_EXCEPTION)
``` | https://github.com/ClickHouse/ClickHouse/issues/35128 | https://github.com/ClickHouse/ClickHouse/pull/35234 | dc205d44da22f1cd5c1cd306b0ac9ba62d17d45d | 5c66030b465342061665f3726e39c4be2645f8b0 | "2022-03-08T15:48:54Z" | c++ | "2022-03-18T20:41:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,121 | ["docs/_includes/install/deb.sh"] | Unable to install under Ubuntu | `apt-get update` with new repo `packages.clickhouse.com` causes error
```
ubuntu@ch:~$ sudo apt-get update
Hit:1 http://nova.clouds.archive.ubuntu.com/ubuntu hirsute InRelease
Hit:2 http://security.ubuntu.com/ubuntu hirsute-security InRelease
Hit:3 http://nova.clouds.archive.ubuntu.com/ubuntu hirsute-updates InRelease
Get:4 https://packages.clickhouse.com/deb stable InRelease [4404 B]
Hit:5 http://nova.clouds.archive.ubuntu.com/ubuntu hirsute-backports InRelease
Fetched 4404 B in 1s (6118 B/s)
Reading package lists... Done
W: Skipping acquire of configured file 'main//binary-amd64/Packages' as repository 'https://packages.clickhouse.com/deb stable InRelease' doesn't have the component 'main/' (component misspelt in sources.list?)
W: Skipping acquire of configured file 'main//i18n/Translation-en' as repository 'https://packages.clickhouse.com/deb stable InRelease' doesn't have the component 'main/' (component misspelt in sources.list?)
W: Skipping acquire of configured file 'main//cnf/Commands-amd64' as repository 'https://packages.clickhouse.com/deb stable InRelease' doesn't have the component 'main/' (component misspelt in sources.list?)
```
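The warnings complain about a component spelt `main/` with a trailing slash. Assuming the repository actually publishes the component as `main`, dropping the slash from the sources entry makes apt happy; a sketch of that correction (the exact packaging fix may differ):

```shell
# Broken entry as generated by the old install instructions:
entry='deb https://packages.clickhouse.com/deb stable main/'
# Drop the trailing slash from the component name:
fixed=$(printf '%s\n' "$entry" | sed 's|[[:space:]]main/$| main|')
printf '%s\n' "$fixed"   # deb https://packages.clickhouse.com/deb stable main
```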
the old repo (`repo.clickhouse.com`) is fine, but it does not have the latest version (22.2.3.5) | https://github.com/ClickHouse/ClickHouse/issues/35121 | https://github.com/ClickHouse/ClickHouse/pull/35125 | a871036361ea8e57660ecd88f1da5dea29b5ebf4 | b1b10d72089eeec62d99ecd75120960edf2572d9 | "2022-03-08T13:14:22Z" | c++ | "2022-03-08T16:06:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,117 | ["src/Functions/ReplaceRegexpImpl.h", "tests/queries/0_stateless/02151_replace_regexp_all_empty_match_alternative.reference", "tests/queries/0_stateless/02151_replace_regexp_all_empty_match_alternative.sql"] | Bug in `replaceRegexpAll` | ```sql
SELECT replaceRegexpAll('a', 'z*', '')
```
```
ββreplaceRegexpAll('a', 'z*', '')ββ
β β
βββββββββββββββββββββββββββββββββββ
```
```sql
SELECT replaceRegexpAll('aaaa', 'z*', '')
```
```
ββreplaceRegexpAll('aaaa', 'z*', '')ββ
β aaa β
ββββββββββββββββββββββββββββββββββββββ
```
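For reference, other regexp engines treat a global replace of a pattern that can only match the empty string as leaving the characters untouched; e.g. GNU sed returns both inputs unchanged (shown here only as a sketch of the expected semantics, not ClickHouse behaviour):

```shell
# 'z*' can only match the empty string here, so replacing all its
# matches with '' should leave the input unchanged.
printf 'a\n'    | sed 's/z*//g'   # a
printf 'aaaa\n' | sed 's/z*//g'   # aaaa
```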
Please create another issue.
_Originally posted by @alexey-milovidov in https://github.com/ClickHouse/ClickHouse/issues/35046#issuecomment-1061220789_ | https://github.com/ClickHouse/ClickHouse/issues/35117 | https://github.com/ClickHouse/ClickHouse/pull/35182 | 585a9edd32e2f9daa19319ad3c28d1af92005a19 | 38fa55fff04bc270cba86e2a2ae67a0d94f12162 | "2022-03-08T09:36:46Z" | c++ | "2022-03-11T22:36:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,076 | ["src/Functions/filesystem.cpp", "tests/fuzz/dictionaries/functions.dict", "tests/queries/0_stateless/00824_filesystem.sql", "tests/queries/0_stateless/02345_filesystem_local.sh", "tests/queries/0_stateless/02415_all_new_functions_must_be_documented.reference", "tests/queries/0_stateless/02457_filesystem_function.reference", "tests/queries/0_stateless/02457_filesystem_function.sql"] | `filesystemAvailable` and similar functions should take optional argument with disk name. | When disk name is not specified, it should use the default disk.
They should return max UInt64 for "infinite" virtual filesystems.
**Use case**
ClickHouse cloud. | https://github.com/ClickHouse/ClickHouse/issues/35076 | https://github.com/ClickHouse/ClickHouse/pull/42064 | 581e57be9fa4ee68e507fe0eb4f33bf787423119 | 5532e3db95550d383bb965cbe8cb8cb51c54165d | "2022-03-05T22:27:56Z" | c++ | "2022-11-21T18:23:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,970 | ["src/Functions/FunctionsExternalDictionaries.h", "tests/queries/0_stateless/02231_hierarchical_dictionaries_constant.reference", "tests/queries/0_stateless/02231_hierarchical_dictionaries_constant.sql"] | HIERARCHICAL dictionary does not support constant keys. | 22.2.2
```sql
create table source (a UInt64, b UInt64, c String) Engine= Memory;
insert into source select number, number, 'attr-'||toString(number) from numbers(10);
CREATE DICTIONARY dict1
( a UInt64 , b UInt64 HIERARCHICAL, c String
) PRIMARY KEY a
SOURCE(clickhouse(DB 'default' TABLE 'source'))
LIFETIME(300) LAYOUT(flat());
SELECT dictGetDescendants('dict1', toUInt64(2), 0)
DB::Exception: Illegal type of third argument of function dictGetDescendants
Expected const unsigned integer.: While processing dictGetDescendants('dict1', toUInt64(2), 0). (ILLEGAL_TYPE_OF_ARGUMENT)
-- but !
SELECT dictGetDescendants('dict1', materialize(toUInt64(2)), 0)
ββdictGetDescendants('dict1', materialize(toUInt64(2)), 0)ββ
β [2] β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
SELECT dictGetDescendants('dict1', number, 0)
FROM numbers(3)
ββdictGetDescendants('dict1', number, 0)ββ
β [0] β
β [1] β
β [2] β
ββββββββββββββββββββββββββββββββββββββββββ
```
> "DB::Exception: Illegal type of third argument of function dictGetDescendants"
Also, the message is incorrect: the problem is with the second argument, not the third. | https://github.com/ClickHouse/ClickHouse/issues/34970 | https://github.com/ClickHouse/ClickHouse/pull/35027 | 7a9b3ae50db22b0cc21778635636960c1be62a9b | 672222740727712d89dd40ac2baa07f5c2d6d0fc | "2022-03-01T16:53:24Z" | c++ | "2022-03-04T09:45:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,953 | ["src/Compression/CompressedReadBufferFromFile.cpp", "tests/queries/0_stateless/02267_empty_arrays_read_reverse.reference", "tests/queries/0_stateless/02267_empty_arrays_read_reverse.sql"] | Attempt to read after eof | DB::Exception: Attempt to read after eof: (while reading column http_user): (while reading from part /xxxxx/01/store/b4f/b4f7cad2-943c-49b4-b8e1-44399bcc45a6/457190_82_107_4/ from mark 0 with max_rows_to_read = 8192): While executing MergeTreeReverse. (ATTEMPT_TO_READ_AFTER_EOF), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa82d07a in /home/clickhouse/bin/clickhouse-server
1. DB::throwReadAfterEOF() @ 0xa83e4db in /home/clickhouse/bin/clickhouse-server
2. ? @ 0xa87c16b in /home/clickhouse/bin/clickhouse-server
3. ? @ 0x1328c4fc in /home/clickhouse/bin/clickhouse-server
4. DB::ISerialization::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >*) const @ 0x132536f5 in /home/clickhouse/bin/clickhouse-server
5. DB::SerializationArray::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >*) const @ 0x1325f4d1 in /home/clickhouse/bin/clickhouse-server
6. DB::MergeTreeReaderWide::readData(DB::NameAndTypePair const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, bool, unsigned long, unsigned long, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >&, bool) @ 0x143a5f0f in /home/clickhouse/bin/clickhouse-server
7. DB::MergeTreeReaderWide::readRows(unsigned long, unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x143a4e8b in /home/clickhouse/bin/clickhouse-server
8. DB::MergeTreeRangeReader::DelayedStream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x14b1fb8e in /home/clickhouse/bin/clickhouse-server
9. DB::MergeTreeRangeReader::continueReadingChain(DB::MergeTreeRangeReader::ReadResult&, unsigned long&) @ 0x14b23c79 in /home/clickhouse/bin/clickhouse-server
10. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0x14b22cd3 in /home/clickhouse/bin/clickhouse-server
11. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0x14b18c08 in /home/clickhouse/bin/clickhouse-server
12. DB::MergeTreeReverseSelectProcessor::readFromPart() @ 0x14b37485 in /home/clickhouse/bin/clickhouse-server
13. DB::MergeTreeBaseSelectProcessor::generate() @ 0x14b18480 in /home/clickhouse/bin/clickhouse-server
14. DB::ISource::tryGenerate() @ 0x148414b5 in /home/clickhouse/bin/clickhouse-server
15. DB::ISource::work() @ 0x1484107a in /home/clickhouse/bin/clickhouse-server
16. DB::SourceWithProgress::work() @ 0x14a8c662 in /home/clickhouse/bin/clickhouse-server
17. DB::ExecutionThreadContext::executeTask() @ 0x14860b23 in /home/clickhouse/bin/clickhouse-server
18. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x1485539e in /home/clickhouse/bin/clickhouse-server
19. ? @ 0x14856b22 in /home/clickhouse/bin/clickhouse-server
20. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa86f4b7 in /home/clickhouse/bin/clickhouse-server
21. ? @ 0xa872ebd in /home/clickhouse/bin/clickhouse-server
22. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
23. __clone @ 0xfdead in /usr/lib64/libc-2.17.so
(version 22.1.3.7 (official build)) | https://github.com/ClickHouse/ClickHouse/issues/34953 | https://github.com/ClickHouse/ClickHouse/pull/36215 | 791454678b27ca2c8a1180065c266840c823b257 | c76b9cc9f5c1c7454ae5992f7aedd4ed26f2dd13 | "2022-03-01T02:02:17Z" | c++ | "2022-04-14T11:51:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,948 | ["README.md", "docs/en/development/browse-code.md", "docs/en/whats-new/changelog/2020.md", "docs/ja/development/browse-code.md", "docs/ru/development/browse-code.md", "docs/zh/changelog/index.md", "docs/zh/development/browse-code.md", "docs/zh/whats-new/changelog/2020.md", "website/sitemap-static.xml"] | Woboq online code browser stopped working |
[**https://clickhouse.com/codebrowser/html_report/ClickHouse/src/index.html**
[**https://clickhouse.com/codebrowser/html_report/ClickHouse/src/index.html**](https://clickhouse.com/codebrowser/html_report/ClickHouse/src/index.html)
```xml
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Resource>/codebrowser/html_report/ClickHouse/src/index.html</Resource>
<RequestId>fd88bda5f268d124</RequestId>
</Error>
``` | https://github.com/ClickHouse/ClickHouse/issues/34948 | https://github.com/ClickHouse/ClickHouse/pull/34971 | 8c533b23820be54901fa06401e6699074bb893a2 | 9e44390974942c243963c87b5f5378114c58affd | "2022-02-28T14:31:46Z" | c++ | "2022-03-02T12:42:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,929 | ["src/Interpreters/SystemLog.cpp", "src/Storages/StorageFile.h"] | clickhouse creates lots of system log tables if config storage settings for system log table | **Describe what's wrong**
clickhouse creates lots of system log tables if config storage settings for system log table
**Does it reproduce on recent release?**
22.1
**How to reproduce**
* Which ClickHouse server version to use
22.1
* Non-default settings, if any
configure the query_log table as below:
```
query_log:
partition_by:
"@remove": 1
engine: |
ENGINE = MergeTree PARTITION BY (event_date)
ORDER BY (event_time)
TTL event_date + INTERVAL 14 DAY DELETE
SETTINGS ttl_only_drop_parts=1
```
**Expected behavior**
ClickHouse should not create new system log tables if the table definition has not changed.
**Error message and/or stacktrace**
2022.02.26 19:53:28.039097 [ 1197451 ] {} <Debug> SystemLog (system.query_log): Existing table system.query_log for system log has obsolete or different structure. Renaming it to query_log_13.
Old: CREATE TABLE system.query_log (....) ENGINE = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + toIntervalDay(14)
New: CREATE TABLE system.query_log (....) ENGINE = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + toIntervalDay(14) SETTINGS ttl_only_drop_parts = 1
**Additional context**
In this commit https://github.com/ClickHouse/ClickHouse/commit/5fafeea76330db8037f2beddd8a4386cad54505a (v3), the SETTINGS clause of MergeTree in the old CREATE TABLE query is reset before it is compared with the new create query.
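A minimal sketch of why that one-sided reset loops forever (illustrative shell, not the actual SystemLog.cpp logic): once both the stored and the configured CREATE statements carry a SETTINGS clause, stripping it from the old side only guarantees a mismatch on every restart, so a fresh `query_log_N` is created each time.

```shell
old='ENGINE = MergeTree ORDER BY event_time SETTINGS ttl_only_drop_parts = 1'
new="$old"   # textually identical to the stored definition
# SETTINGS is reset on the old query only before the comparison...
stripped_old="${old%% SETTINGS*}"
# ...so the check always sees a "different" structure and renames the table.
[ "$stripped_old" = "$new" ] && echo same || echo different   # different
```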
| https://github.com/ClickHouse/ClickHouse/issues/34929 | https://github.com/ClickHouse/ClickHouse/pull/34949 | af4362e40a995af76be449dc87728b0a1c174028 | 4b61e4795c2e18b9cd479c906542fd81a8edd97e | "2022-02-27T04:24:27Z" | c++ | "2022-03-01T10:15:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,864 | ["src/Interpreters/AsynchronousInsertQueue.cpp", "src/Interpreters/InterpreterInsertQuery.cpp", "tests/queries/0_stateless/02226_async_insert_table_function.reference", "tests/queries/0_stateless/02226_async_insert_table_function.sql"] | async_insert to table function gives exception 'Both table name and UUID are empty' | ```
create table test (x UInt64) engine=Null;
insert into function remote('127.0.0.1',default,test) values (1);
-- ok
set async_insert=1;
insert into function remote('127.0.0.1',default,test) values (1);
-- fail
```
exception
```
Received exception from server (version 22.3.1):
Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Both table name and UUID are empty. Stack trace:
0. ./build_docker/../contrib/libcxx/include/exception:133: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x15bff96c in /usr/bin/clickhouse
1. ./build_docker/../src/Common/Exception.cpp:58: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa8d817a in /usr/bin/clickhouse
2. DB::StorageID::assertNotEmpty() const @ 0x10225ff1 in /usr/bin/clickhouse
3. ./build_docker/../contrib/libcxx/include/string:1444: DB::StorageID::getDatabaseName() const @ 0x131ec0d6 in /usr/bin/clickhouse
4. ./build_docker/../src/Interpreters/Context.cpp:0: DB::Context::checkAccess(DB::AccessFlags const&, DB::StorageID const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) const @ 0x12b96564 in /usr/bin/clickhouse
5. ./build_docker/../contrib/libcxx/include/vector:463: DB::AsynchronousInsertQueue::push(std::__1::shared_ptr<DB::IAST>, std::__1::shared_ptr<DB::Context const>) @ 0x12b61130 in /usr/bin/clickhouse
6. ./build_docker/../src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x132fbb4e in /usr/bin/clickhouse
7. ./build_docker/../src/Interpreters/executeQuery.cpp:985: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x132fa4cd in /usr/bin/clickhouse
8. ./build_docker/../src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x13b66440 in /usr/bin/clickhouse
9. ./build_docker/../src/Server/TCPHandler.cpp:1918: DB::TCPHandler::run() @ 0x13b75759 in /usr/bin/clickhouse
10. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x15aef967 in /usr/bin/clickhouse
11. ./build_docker/../contrib/libcxx/include/memory:1397: Poco::Net::TCPServerDispatcher::run() @ 0x15aefe27 in /usr/bin/clickhouse
12. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x15c619e7 in /usr/bin/clickhouse
13. ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:156: Poco::ThreadImpl::runnableEntry(void*) @ 0x15c5f3c6 in /usr/bin/clickhouse
14. ? @ 0x7f742d0b7609 in ?
15. clone @ 0x7f742cfde293 in ?
. (UNKNOWN_TABLE)
``` | https://github.com/ClickHouse/ClickHouse/issues/34864 | https://github.com/ClickHouse/ClickHouse/pull/34866 | ef9bf92a286a0b939fc6e28a70549b49e274afa0 | 82d24f06eb256597a5995938479bd982b8e47a2a | "2022-02-24T08:51:39Z" | c++ | "2022-03-01T16:45:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,812 | ["src/Formats/CMakeLists.txt", "src/Formats/configure_config.cmake", "src/Storages/System/CMakeLists.txt", "src/Storages/System/StorageSystemBuildOptions.generated.cpp.in"] | build_options stopped to show that some options are enabled | ```sql
ClickHouse client version 22.2.2.1.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 22.2.2 revision 54455.
SELECT * FROM system.build_options WHERE name LIKE 'USE%';
ββnameβββββββββββββββββββββ¬βvalueββ
β USE_EMBEDDED_COMPILER β 1 β
β USE_GLIBC_COMPATIBILITY β ON β
β USE_JEMALLOC β ON β
β USE_UNWIND β ON β
β USE_ICU β 1 β
β USE_H3 β β
β USE_MYSQL β 1 β
β USE_RDKAFKA β 1 β
β USE_CAPNP β β
β USE_BASE64 β 1 β
β USE_HDFS β 1 β
β USE_SNAPPY β 1 β
β USE_PARQUET β β
β USE_PROTOBUF β β
β USE_BROTLI β 1 β
β USE_SSL β 1 β
β USE_HYPERSCAN β ON β
β USE_SIMDJSON β 1 β
β USE_ODBC β 1 β
β USE_GRPC β 1 β
β USE_LDAP β 1 β
β USE_KRB5 β 1 β
β USE_FILELOG β 1 β
β USE_BZIP2 β 1 β
β USE_AMQPCPP β 1 β
β USE_ROCKSDB β 1 β
β USE_NURAFT β 1 β
β USE_NLP β 1 β
β USE_SQLITE β 1 β
β USE_LIBPQXX β 1 β
β USE_AZURE_BLOB_STORAGE β 1 β
β USE_AWS_S3 β 1 β
β USE_CASSANDRA β 1 β
β USE_YAML_CPP β 1 β
β USE_SENTRY β 1 β
β USE_DATASKETCHES β 1 β
β USE_AVRO β β
β USE_ARROW β β
β USE_ORC β β
β USE_MSGPACK β β
βββββββββββββββββββββββββββ΄ββββββββ
SELECT *
FROM system.build_options
WHERE (name LIKE 'USE%') AND (value = '');
ββnameββββββββββ¬βvalueββ
β USE_H3 β β
β USE_CAPNP β β
β USE_PARQUET β β
β USE_PROTOBUF β β
β USE_AVRO β β
β USE_ARROW β β
β USE_ORC β β
β USE_MSGPACK β β
ββββββββββββββββ΄ββββββββ | https://github.com/ClickHouse/ClickHouse/issues/34812 | https://github.com/ClickHouse/ClickHouse/pull/34823 | 115c0c2aba7a8bb9e24d4f1cf369d229a42bd3a9 | c037adb060aff46935373694965fc17026ed6f37 | "2022-02-22T14:48:15Z" | c++ | "2022-03-04T18:17:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,810 | ["src/Functions/in.cpp", "tests/queries/0_stateless/02226_in_untuple_issue_34810.reference", "tests/queries/0_stateless/02226_in_untuple_issue_34810.sql"] | Incorrect error message: Number of columns in section IN doesn't match. | If there is an `IN` condition with a tuple and types don't match, CH throws `DB::Exception: Number of columns in section IN doesn't match. 2 at left, 1 at right.`
**Affected versions**: From 19.17.6.36 to 22.2.2.1
**How to reproduce**
```sql
CREATE TABLE calendar
(
`year` Int64,
`month` Int64
)
ENGINE = TinyLog;
insert into calendar VALUES(2000,1),(2000,2),(2000,3);
CREATE TABLE events32
(
`year` Int32,
`month` Int32
)
ENGINE = TinyLog;
insert into events32 values (2001,2),(2001,3);
CREATE TABLE events64
(
`year` Int64,
`month` Int64
)
ENGINE = TinyLog;
insert into events64 values (2001,2),(2001,3)
```
If both sides are of type `Tuple(Int64, Int64)`, it works fine:
```sql
SELECT *
FROM calendar
WHERE (year, month) IN (
SELECT (year, month)
FROM events64
)
Ok.
0 rows in set. Elapsed: 0.005 sec.
```
But if the right side is of type `Tuple(Int32, Int32)` while the left side is `Tuple(Int64, Int64)`, it throws `Number of columns in section IN doesn't match`, which is actually not true.
```sql
SELECT *
FROM calendar
WHERE (year, month) IN (
SELECT (year, month)
FROM events32
)
0 rows in set. Elapsed: 0.005 sec.
Received exception from server (version 22.2.2):
Code: 20. DB::Exception: Received from localhost:9000. DB::Exception: Number of columns in section IN doesn't match.
2 at left, 1 at right.: while executing
'FUNCTION in(tuple(year, month) :: 3, _subquery8 :: 2) -> in(tuple(year, month), _subquery8) UInt8 : 4'.
(NUMBER_OF_COLUMNS_DOESNT_MATCH)
```
**Expected behavior**
Throw the expected error message, indicating a type mismatch.
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,799 | ["src/Core/Settings.h", "src/Core/SettingsEnums.cpp", "src/DataTypes/Serializations/SerializationDateTime.cpp", "src/DataTypes/Serializations/SerializationDateTime64.cpp", "src/Formats/FormatSettings.h"] | date_time_input_format = 'best_effort_us' | **Use case**
Like `date_time_input_format = 'best_effort'` but with disambiguation to American mm/dd/yyyy style.
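A minimal sketch of the requested disambiguation, written in Python as an illustration only (the real implementation would live in ClickHouse's DateTime parsing, and the function name `parse_best_effort` is an assumption for this example). The idea is to prefer the mm/dd/yyyy reading for ambiguous dates, falling back to dd/mm/yyyy when the preferred reading is impossible:

```python
from datetime import datetime

def parse_best_effort(text: str, us_style: bool = False) -> datetime:
    """Parse an ambiguous xx/yy/zzzz date-time, preferring the US
    mm/dd/yyyy reading when us_style is True and the dd/mm/yyyy
    reading otherwise. Falls back to the other order if the preferred
    one is impossible (e.g. 25/12/2021 cannot be mm/dd)."""
    formats = ["%m/%d/%Y %H:%M:%S", "%d/%m/%Y %H:%M:%S"]
    if not us_style:
        formats.reverse()
    for fmt in formats:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError(f"cannot parse {text!r}")

# 02/01/2022 is ambiguous: Feb 1 in US style, Jan 2 otherwise.
print(parse_best_effort("02/01/2022 00:00:00", us_style=True).month)   # 2
print(parse_best_effort("02/01/2022 00:00:00", us_style=False).month)  # 1
```

Unambiguous inputs such as `25/12/2021` would parse identically under both settings; only genuinely ambiguous ones change meaning.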
| https://github.com/ClickHouse/ClickHouse/issues/34799 | https://github.com/ClickHouse/ClickHouse/pull/34982 | 7d90afb3b0960fbf7fab9c81cd341795ce45f650 | f1b1baf56e681ae4b93de0424ea349bf2f7faae3 | "2022-02-21T22:06:35Z" | c++ | "2022-03-03T08:22:57Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,798 | ["src/Storages/MergeTree/MergeTreeData.cpp", "tests/queries/0_stateless/02455_improve_feedback_when_replacing_partition_with_different_primary_key.reference", "tests/queries/0_stateless/02455_improve_feedback_when_replacing_partition_with_different_primary_key.sql"] | Improve feedback when replacing partition with different primary key | **Describe the issue**
Improve feedback when trying to replace a partition with a different index
**Which ClickHouse server version to use**
21.9.5
**How to reproduce**
```
DROP DATABASE IF EXISTS test;
CREATE DATABASE test;
CREATE TABLE test.A (id UInt32, company UInt32, total UInt64) ENGINE=SummingMergeTree() PARTITION BY company PRIMARY KEY (id) ORDER BY (id, company);
INSERT INTO test.A SELECT number%10 as id, number%2 as company, count() as total FROM numbers(100) GROUP BY id,company;
CREATE TABLE test.B (id UInt32, company UInt32, total UInt64) ENGINE=SummingMergeTree() PARTITION BY company ORDER BY (id, company);
ALTER TABLE test.B REPLACE PARTITION '0' FROM test.A;
```
**Error message and/or stacktrace**
```
Received exception from server (version 21.9.5):
Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Cannot read all data. Bytes read: 0. Bytes expected: 4.. (CANNOT_READ_ALL_DATA)
(query: ALTER TABLE test.B REPLACE PARTITION '0' FROM test.A;)
````
**Expected behavior**
```
Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Cannot replace partition with a different index (CANNOT_REPLACE_PARTITION)
(query: ALTER TABLE test.B REPLACE PARTITION '0' FROM test.A;)
```
| https://github.com/ClickHouse/ClickHouse/issues/34798 | https://github.com/ClickHouse/ClickHouse/pull/41838 | 1d9e126f941f81f1839c465f1f8840d22ebd450b | 072c19ba96236a9dc3a8a613afe607c3c7cf6aa9 | "2022-02-21T15:53:39Z" | c++ | "2022-09-28T13:33:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,787 | ["src/Common/QueryProfiler.cpp"] | 22.1 and 22.2 server crashed on VirtualBox VM | I was using 50 concurrent users kept sending query `select number, toDateTime(number), toString(number) from numbers(10000)` using JDBC driver to ClickHouse http interface. After running the test for 10 minutes or so, ClickHouse server crashed leaving below error logs. I'll restart the container and re-run the test.
```bash
2022.02.21 00:30:26.676747 [ 691 ] {} <Fatal> BaseDaemon: ########################################
2022.02.21 00:30:26.682373 [ 691 ] {} <Fatal> BaseDaemon: (version 22.1.3.7 (official build), build id: D11BC54A7FE20E44) (from thread 689) (query_id: 47848c47-9c3a-4c1f-8770-dcc43974a88b) Received signal Arithmetic exception (8)
2022.02.21 00:30:26.682396 [ 691 ] {} <Fatal> BaseDaemon: Integer divide by zero.
2022.02.21 00:30:26.682403 [ 691 ] {} <Fatal> BaseDaemon: Stack trace: 0xa8671e8 0x7f69d289f3c0 0x7f69d289a376 0x175a2481 0xa8713e1 0xa871130 0x14927c5c 0x13d22da0 0x1156b348 0x1317a37e 0x13d18028 0x145a6b3a 0x145ab247 0x1480e59d 0x1745e52f 0x17460981 0x17611609 0x1760ed00 0x7f69d2893609 0x7f69d27ba293
2022.02.21 00:30:26.682436 [ 691 ] {} <Fatal> BaseDaemon: 2. ? @ 0xa8671e8 in /usr/bin/clickhouse
2022.02.21 00:30:26.682443 [ 691 ] {} <Fatal> BaseDaemon: 3. ? @ 0x7f69d289f3c0 in ?
2022.02.21 00:30:26.682448 [ 691 ] {} <Fatal> BaseDaemon: 4. pthread_cond_wait @ 0x7f69d289a376 in ?
2022.02.21 00:30:26.682462 [ 691 ] {} <Fatal> BaseDaemon: 5. Poco::EventImpl::waitImpl() @ 0x175a2481 in /usr/bin/clickhouse
2022.02.21 00:30:26.682473 [ 691 ] {} <Fatal> BaseDaemon: 6. ThreadPoolImpl<ThreadFromGlobalPool>::finalize() @ 0xa8713e1 in /usr/bin/clickhouse
2022.02.21 00:30:26.682477 [ 691 ] {} <Fatal> BaseDaemon: 7. ThreadPoolImpl<ThreadFromGlobalPool>::~ThreadPoolImpl() @ 0xa871130 in /usr/bin/clickhouse
2022.02.21 00:30:26.682484 [ 691 ] {} <Fatal> BaseDaemon: 8. DB::ParallelFormattingOutputFormat::~ParallelFormattingOutputFormat() @ 0x14927c5c in /usr/bin/clickhouse
2022.02.21 00:30:26.682490 [ 691 ] {} <Fatal> BaseDaemon: 9. ? @ 0x13d22da0 in /usr/bin/clickhouse
2022.02.21 00:30:26.682497 [ 691 ] {} <Fatal> BaseDaemon: 10. DB::SourceWithProgress::~SourceWithProgress() @ 0x1156b348 in /usr/bin/clickhouse
2022.02.21 00:30:26.682502 [ 691 ] {} <Fatal> BaseDaemon: 11. DB::QueryPipeline::reset() @ 0x1317a37e in /usr/bin/clickhouse
2022.02.21 00:30:26.682516 [ 691 ] {} <Fatal> BaseDaemon: 12. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x13d18028 in /usr/bin/clickhouse
2022.02.21 00:30:26.682535 [ 691 ] {} <Fatal> BaseDaemon: 13. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x145a6b3a in /usr/bin/clickhouse
2022.02.21 00:30:26.682540 [ 691 ] {} <Fatal> BaseDaemon: 14. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x145ab247 in /usr/bin/clickhouse
2022.02.21 00:30:26.682545 [ 691 ] {} <Fatal> BaseDaemon: 15. DB::HTTPServerConnection::run() @ 0x1480e59d in /usr/bin/clickhouse
2022.02.21 00:30:26.683736 [ 691 ] {} <Fatal> BaseDaemon: 16. Poco::Net::TCPServerConnection::start() @ 0x1745e52f in /usr/bin/clickhouse
2022.02.21 00:30:26.683755 [ 691 ] {} <Fatal> BaseDaemon: 17. Poco::Net::TCPServerDispatcher::run() @ 0x17460981 in /usr/bin/clickhouse
2022.02.21 00:30:26.683761 [ 691 ] {} <Fatal> BaseDaemon: 18. Poco::PooledThread::run() @ 0x17611609 in /usr/bin/clickhouse
2022.02.21 00:30:26.683768 [ 691 ] {} <Fatal> BaseDaemon: 19. Poco::ThreadImpl::runnableEntry(void*) @ 0x1760ed00 in /usr/bin/clickhouse
2022.02.21 00:30:26.683772 [ 691 ] {} <Fatal> BaseDaemon: 20. ? @ 0x7f69d2893609 in ?
2022.02.21 00:30:26.683776 [ 691 ] {} <Fatal> BaseDaemon: 21. __clone @ 0x7f69d27ba293 in ?
2022.02.21 00:30:26.836674 [ 691 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 38FD0E3944230CBD1E0C1028A9D68C83. There is no information about the reference checksum.
```
Update:
I wasn't able to reproduce the issue in second run, which took 50+ minutes(due to slow and unstable wifi between my laptop and VM).
Label | # Samples | Average | Median | 90% Line | 95% Line | 99% Line | Min | Max | Error % | Throughput | Received KB/sec | Sent KB/sec
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
JDBC Request | 500000 | 322 | 286 | 428 | 617 | 1007 | 25 | 15018 | 0.0 | 154.4752722163247 | 44852.43836513876 | 0.0
Update again:
Updated the image to 22.2 and ran into the issue again:
```
2022.02.21 17:08:49.330757 [ 465 ] {} <Fatal> BaseDaemon: ########################################
2022.02.21 17:08:49.330791 [ 465 ] {} <Fatal> BaseDaemon: (version 22.2.2.1, build id: 5F3D9E4F48D4CC47) (from thread 299) (query_id: 7d79a97f-998d-400e-a4a7-2fc1a63be6fe) (query: select number, toDateTime(number), toString(number) from numbers(10000)) Received signal Arithmetic exception (8)
2022.02.21 17:08:49.330808 [ 465 ] {} <Fatal> BaseDaemon: Integer divide by zero.
2022.02.21 17:08:49.330818 [ 465 ] {} <Fatal> BaseDaemon: Stack trace: 0xaf01a08 0x7f0de90023c0 0x7f0de8ffd374 0x187ab341 0x15b27201 0x15a6cf8d 0x15a60ca3 0x15a54b7e 0x15a53ddb 0x15a53718 0x15a52244 0x14eec693 0x157a42bd 0x157a8a52 0x15a0d67b 0x18667a0f 0x18669e61 0x1881a549 0x18817c40 0x7f0de8ff6609 0x7f0de8f1d293
2022.02.21 17:08:49.330845 [ 465 ] {} <Fatal> BaseDaemon: 2. ? @ 0xaf01a08 in /usr/bin/clickhouse
2022.02.21 17:08:49.330859 [ 465 ] {} <Fatal> BaseDaemon: 3. ? @ 0x7f0de90023c0 in ?
2022.02.21 17:08:49.330865 [ 465 ] {} <Fatal> BaseDaemon: 4. pthread_cond_wait @ 0x7f0de8ffd374 in ?
2022.02.21 17:08:49.330880 [ 465 ] {} <Fatal> BaseDaemon: 5. Poco::EventImpl::waitImpl() @ 0x187ab341 in /usr/bin/clickhouse
2022.02.21 17:08:49.330892 [ 465 ] {} <Fatal> BaseDaemon: 6. DB::ParallelFormattingOutputFormat::finalizeImpl() @ 0x15b27201 in /usr/bin/clickhouse
2022.02.21 17:08:49.330898 [ 465 ] {} <Fatal> BaseDaemon: 7. DB::IOutputFormat::work() @ 0x15a6cf8d in /usr/bin/clickhouse
2022.02.21 17:08:49.330903 [ 465 ] {} <Fatal> BaseDaemon: 8. DB::ExecutionThreadContext::executeTask() @ 0x15a60ca3 in /usr/bin/clickhouse
2022.02.21 17:08:49.330910 [ 465 ] {} <Fatal> BaseDaemon: 9. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x15a54b7e in /usr/bin/clickhouse
2022.02.21 17:08:49.330914 [ 465 ] {} <Fatal> BaseDaemon: 10. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x15a53ddb in /usr/bin/clickhouse
2022.02.21 17:08:49.330919 [ 465 ] {} <Fatal> BaseDaemon: 11. DB::PipelineExecutor::execute(unsigned long) @ 0x15a53718 in /usr/bin/clickhouse
2022.02.21 17:08:49.330923 [ 465 ] {} <Fatal> BaseDaemon: 12. DB::CompletedPipelineExecutor::execute() @ 0x15a52244 in /usr/bin/clickhouse
2022.02.21 17:08:49.330937 [ 465 ] {} <Fatal> BaseDaemon: 13. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x14eec693 in /usr/bin/clickhouse
2022.02.21 17:08:49.331080 [ 465 ] {} <Fatal> BaseDaemon: 14. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x157a42bd in /usr/bin/clickhouse
2022.02.21 17:08:49.331086 [ 465 ] {} <Fatal> BaseDaemon: 15. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x157a8a52 in /usr/bin/clickhouse
2022.02.21 17:08:49.331092 [ 465 ] {} <Fatal> BaseDaemon: 16. DB::HTTPServerConnection::run() @ 0x15a0d67b in /usr/bin/clickhouse
2022.02.21 17:08:49.331099 [ 465 ] {} <Fatal> BaseDaemon: 17. Poco::Net::TCPServerConnection::start() @ 0x18667a0f in /usr/bin/clickhouse
2022.02.21 17:08:49.331103 [ 465 ] {} <Fatal> BaseDaemon: 18. Poco::Net::TCPServerDispatcher::run() @ 0x18669e61 in /usr/bin/clickhouse
2022.02.21 17:08:49.331109 [ 465 ] {} <Fatal> BaseDaemon: 19. Poco::PooledThread::run() @ 0x1881a549 in /usr/bin/clickhouse
2022.02.21 17:08:49.331113 [ 465 ] {} <Fatal> BaseDaemon: 20. Poco::ThreadImpl::runnableEntry(void*) @ 0x18817c40 in /usr/bin/clickhouse
2022.02.21 17:08:49.331117 [ 465 ] {} <Fatal> BaseDaemon: 21. ? @ 0x7f0de8ff6609 in ?
2022.02.21 17:08:49.331122 [ 465 ] {} <Fatal> BaseDaemon: 22. __clone @ 0x7f0de8f1d293 in ?
2022.02.21 17:08:49.460881 [ 465 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 03A21B8EF25D04A4DCD8C0FCA8310FDA. There is no information about the reference checksum.
``` | https://github.com/ClickHouse/ClickHouse/issues/34787 | https://github.com/ClickHouse/ClickHouse/pull/35032 | 4f665eb01d42bbd839170ea214eb18d8ef1380c3 | 83de2f66d108c20c373505aa4b0e7ae25b2478f9 | "2022-02-21T09:03:03Z" | c++ | "2022-03-10T16:49:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,785 | ["programs/client/Client.cpp", "tests/queries/0_stateless/02100_multiple_hosts_command_line_set.reference", "tests/queries/0_stateless/02100_multiple_hosts_command_line_set.sh"] | clickhouse-client doesn't load the host/port config from disk anymore | **Describe what's wrong**
Since the upgrade to `v22.2.2.1` the host/port configuration for the `clickhouse-client` is not loaded from disk anymore.
**Does it reproduce on recent release?**
Reproduces on v22.2.2.1.
**How to reproduce**
* Which ClickHouse server version to use
v22.2.2.1
* Which interface to use, if matters
`clickhouse-client`
* Non-default settings, if any
```
cat /etc/clickhouse-client/conf.d/clickhouse-local.xml
<config>
<host>12.34.56.78</host>
<port>9876</port>
</config>
```
**Expected behavior**
running `clickhouse-client` uses the `host` / `port` from the config file.
**Error message and/or stacktrace**
```
$ clickhouse-client
ClickHouse client version 22.2.2.1.
Connecting to localhost:9000 as user default.
Code: 210. DB::NetException: Connection refused (localhost:9000). (NETWORK_ERROR)
```
**Additional context**
After some investigation, the reason is this functionality was removed in the commit e780c1292d837b03365f4e210d3a270e66257c4f (why?) | https://github.com/ClickHouse/ClickHouse/issues/34785 | https://github.com/ClickHouse/ClickHouse/pull/34791 | 90ae785e539f3f443f97c9c308c426de73e47049 | fac232842a734cdeb71997020f38f401db4d23b7 | "2022-02-21T07:31:06Z" | c++ | "2022-02-22T08:46:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,776 | ["programs/client/Client.cpp", "tests/queries/0_stateless/02100_multiple_hosts_command_line_set.reference", "tests/queries/0_stateless/02100_multiple_hosts_command_line_set.sh"] | clickhouse-client ignores port parameter from config file, ClickHouse 22.2.2 | **Describe the issue**
In the latest version (22.2.2.1) clickhouse-client ignores `port` parameter from config file.
**How to reproduce**
Create `config.xml` with arbitrary value for `port` parameter.
```
<config>
<port>9001</port>
</config>
```
Run `clickhouse-client` with custom config file
```
# clickhouse-client --log-level trace -c config.xml
Processing configuration file 'config.xml'.
ClickHouse client version 22.2.2.1.
Connecting to localhost:9000 as user default.
Connecting. Database: (not specified). User: default. Uncompressed
Code: 210. DB::NetException: Connection refused (localhost:9000). (NETWORK_ERROR)
Uninitializing subsystem: Logging Subsystem
```
The output indicates that the default port value (9000) was used instead of the value from the config file (9001).
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,769 | ["src/Interpreters/Cluster.cpp"] | It's inconvenient that we require to specify `<port>` in `<remote_servers>` cluster configuration. | **Describe the issue**
```
remote_servers.play.shard.replica.port
```
Should use server's TCP port by default. | https://github.com/ClickHouse/ClickHouse/issues/34769 | https://github.com/ClickHouse/ClickHouse/pull/34772 | fbcc27a339358331eed2ef4bda26b733903d8335 | 8a04ed72af3a60fbd3f3b97f7cdc3c27ebc553c0 | "2022-02-20T16:08:54Z" | c++ | "2022-03-21T13:45:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,763 | ["docker/test/performance-comparison/report.py", "tests/ci/performance_comparison_check.py"] | Performance tests should always turn red if there are "Run errors" | [Run Errors](https://s3.amazonaws.com/clickhouse-test-reports/33057/5a8cf3ac98808dadf125068a33ed9c622998a484/performance_comparison__actions__[4/4]/report.html#run-errors) | https://github.com/ClickHouse/ClickHouse/issues/34763 | https://github.com/ClickHouse/ClickHouse/pull/34797 | 9c1a06703a2f4cae38060534511ce5468d86ff87 | 0b3aac5660080373f6af54b903502675d2772276 | "2022-02-20T11:17:32Z" | c++ | "2022-05-04T18:45:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,755 | ["docker/packager/binary/build.sh", "docker/test/fuzzer/run-fuzzer.sh", "docker/test/performance-comparison/compare.sh"] | Make `clickhouse` binary a self extracting executable. | **Use case**
1. More quick downloads but without the need of unpacking or installing any separate tools.
2. Include `odbc-bridge` and `library-bridge` in single-binary ClickHouse downloads despite the fact that they are separate binaries.
3. Maybe the size will be ok to always include debug info, even in .deb/.rpm packages.
4. It will also speed up cold start in the cloud.
5. #29378
**Describe the solution you'd like**
Compile a tool that can compress and decompress with ZSTD. It can use the official zstd framing format but not necessarily. It should support compression by blocks of size around 10..100 MiB (to avoid too high memory usage) and checksums. It should not depend on glibc version, and even better if it will be statically linked. Maybe two tools - one for compression and another for decompression.
Post-build rule will compress `clickhouse` (and possibly `clickhouse-odbc-bridge` and `clickhouse-jdbc-bridge`) binary into blob and then include it into the decompressor with `llvm-objcopy` as a custom ELF section (alternatively - maybe we can concatenate it after the decompressor binary).
Decompressor should check the free space in the current directory, decompress the content into temporary file with a similar name (like `clickhouse.tmp`), perform `fsync`, rename the current binary to a file with similar name (like `.compressed`), rename the decompressed binary to the original name, delete the compressed binary and run the decompressed binary with the same arguments.
If the `clickhouse-odbc-bridge` and `clickhouse-library-bridge` binaries are present, they should be extracted into the current directory, and the `clickhouse install` script should check if they are present.
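A rough Python sketch of the block-wise compression framing described above, using zlib and CRC32 as stand-ins for zstd and its checksums (the block size, frame layout, and checksum choice here are illustrative assumptions, not the official zstd framing format):

```python
import struct
import zlib

BLOCK_SIZE = 64 * 1024 * 1024  # a block size in the suggested 10..100 MiB range

def compress_blocks(data: bytes, block_size: int = BLOCK_SIZE) -> bytes:
    """Frame each block as [u32 compressed_len][u32 crc32][payload]."""
    out = bytearray()
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        payload = zlib.compress(block)  # stand-in for zstd compression
        out += struct.pack("<II", len(payload), zlib.crc32(block))
        out += payload
    return bytes(out)

def decompress_blocks(framed: bytes) -> bytes:
    """Decompress block by block, verifying each checksum; memory usage
    stays bounded by one block rather than the whole binary."""
    out = bytearray()
    pos = 0
    while pos < len(framed):
        length, checksum = struct.unpack_from("<II", framed, pos)
        pos += 8
        block = zlib.decompress(framed[pos:pos + length])
        pos += length
        if zlib.crc32(block) != checksum:
            raise ValueError("checksum mismatch")
        out += block
    return bytes(out)
```

A real decompressor, per the steps above, would then write the result to a temporary file (e.g. `clickhouse.tmp`), fsync it, rename it over the original name, and exec it with the same arguments.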
**Describe alternatives you've considered**
There are existing specialized tools for compressing executables (like UPX). But zstd will do it better.
**Additional context**
A shortcut for `clickhouse install` in the decompressor can be implemented to avoid extra file copies (first into current directory then to the install destination like `/usr/bin`).
It makes sense to parallelize decompression. | https://github.com/ClickHouse/ClickHouse/issues/34755 | https://github.com/ClickHouse/ClickHouse/pull/39617 | 5a82119fd0765de7b2ac1dadbbac76195befd510 | cb7f072fe88a66387a0f990920db626c1774741f | "2022-02-19T18:03:07Z" | c++ | "2022-08-08T05:19:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,742 | ["tests/ci/git_helper.py", "tests/ci/git_test.py", "tests/ci/release.py"] | Add a `patch AKA stable` release to the release script | null | https://github.com/ClickHouse/ClickHouse/issues/34742 | https://github.com/ClickHouse/ClickHouse/pull/34740 | eeea322556dc3b0c84a67caab0164e2de301d614 | c44aeda23cb26e643cabee01682688c131c364c8 | "2022-02-18T23:42:42Z" | c++ | "2022-02-22T10:39:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,724 | ["tests/queries/0_stateless/02513_insert_without_materialized_columns.reference", "tests/queries/0_stateless/02513_insert_without_materialized_columns.sh"] | Can't import data from a file into a table with materialized columns. | I can't import data from a file into a table with materialized columns.
```sql
create table test (
a Int64,
b Int64 materialized a
)
engine = MergeTree()
primary key tuple();
insert into test values (1);
select * from test into outfile '/tmp/test.native.zstd' format Native;
truncate table test;
insert into test from infile '/tmp/test.native.zstd';
-- Received exception from server (version 22.1.2):
-- Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Not found column b in block. There are only columns: a. -- (NOT_FOUND_COLUMN_IN_BLOCK)
create table test_from_file (
a Int64
)
engine = MergeTree()
primary key tuple();
insert into test_from_file from infile '/tmp/test.native.zstd';
-- OK
insert into test select * from test_from_file;
-- OK
```
ClickHouse 22.1.2.2
The [documentation](https://clickhouse.com/docs/en/sql-reference/statements/create/table/#materialized) says it's possible. | https://github.com/ClickHouse/ClickHouse/issues/34724 | https://github.com/ClickHouse/ClickHouse/pull/44360 | 58d849d73229a73e479dcc3a426e0c5943c527dc | 21d9e7ebc3a01bded5c616cf339c6ac4c7a8df68 | "2022-02-18T12:09:32Z" | c++ | "2022-12-27T11:53:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,719 | ["src/Client/ClientBase.cpp"] | print-profile-events not always do proper summary | With ` --profile-events-delay-ms=-1` i expect to see only `[ 0 ]` (i.e. summary) rows in the output.
But for some queries, it prints also some per-thread stats.
```
clickhouse-client --query='CREATE TABLE test.aaaa Engine = MergeTree ORDER BY (ac, nw) AS SELECT toUInt64(rand(1) % 20000000) as ac, toFloat32(1) as wg, toUInt16(rand(3) % 400) as nw FROM numbers_mt(10000000)'
clickhouse-client --print-profile-events --profile-events-delay-ms=-1 --query='SELECT nw, sum(WR) AS R FROM ( SELECT AVG(wg) AS WR, ac, nw FROM test.aaaa GROUP BY ac, nw ) GROUP BY nw ORDER BY R DESC LIMIT 10'
323 25480
391 25417
314 25391
295 25390
76 25362
68 25326
203 25323
34 25318
98 25272
330 25269
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] ReadBufferFromFileDescriptorRead: 25 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] ReadBufferFromFileDescriptorReadBytes: 9774080 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] ReadCompressedBytes: 9683553 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] CompressedReadBufferBlocks: 371 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] CompressedReadBufferBytes: 24313856 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] OpenedFileCacheHits: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] IOBufferAllocs: 6 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] IOBufferAllocBytes: 1283047 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] ArenaAllocChunks: 12 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] ArenaAllocBytes: 33546240 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] MarkCacheHits: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] CreatedReadBufferOrdinary: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] SelectedRows: 1736208 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] SelectedBytes: 24306912 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] ContextLock: 1 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] RealTimeMicroseconds: 239654 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] UserTimeMicroseconds: 190158 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] SystemTimeMicroseconds: 47825 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] SoftPageFaults: 33870 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] OSCPUVirtualTimeMicroseconds: 237976 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] OSReadChars: 9081856 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] OSWriteChars: 10240 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] QueryProfilerRuns: 1 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] MemoryTrackerUsage: 73276641 (gauge)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] Seek: 6 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] ReadBufferFromFileDescriptorRead: 35 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] ReadBufferFromFileDescriptorReadBytes: 12901995 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] ReadCompressedBytes: 10197959 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] CompressedReadBufferBlocks: 378 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] CompressedReadBufferBytes: 24772174 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] OpenedFileCacheHits: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] IOBufferAllocs: 12 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] IOBufferAllocBytes: 2852762 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] ArenaAllocChunks: 12 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] ArenaAllocBytes: 33546240 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] MarkCacheHits: 6 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] CreatedReadBufferOrdinary: 6 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] SelectedRows: 1769007 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] SelectedBytes: 24766098 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] ContextLock: 1 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] RealTimeMicroseconds: 257804 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] UserTimeMicroseconds: 210207 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] SystemTimeMicroseconds: 43849 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] SoftPageFaults: 36042 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] OSCPUVirtualTimeMicroseconds: 254050 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] OSReadChars: 12901376 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] OSWriteChars: 11264 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] MemoryTrackerUsage: 69321981 (gauge)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] FileOpen: 6 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] ReadBufferFromFileDescriptorRead: 56 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] ReadBufferFromFileDescriptorReadBytes: 7919854 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] ReadCompressedBytes: 8442798 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] CompressedReadBufferBlocks: 345 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] CompressedReadBufferBytes: 20643840 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] OpenedFileCacheMisses: 6 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] IOBufferAllocs: 9 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] IOBufferAllocBytes: 1383231 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] ArenaAllocChunks: 12 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] ArenaAllocBytes: 33546240 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] MarkCacheMisses: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] CreatedReadBufferOrdinary: 6 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] DiskReadElapsedMicroseconds: 4018 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] SelectedRows: 1474188 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] SelectedBytes: 20638632 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] ContextLock: 1 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] RealTimeMicroseconds: 254468 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] UserTimeMicroseconds: 218064 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] SystemTimeMicroseconds: 35821 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] SoftPageFaults: 31287 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] OSCPUWaitMicroseconds: 258 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] OSCPUVirtualTimeMicroseconds: 253879 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] OSReadChars: 7919616 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] OSWriteChars: 10240 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] MemoryTrackerUsage: 63758681 (gauge)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] FileOpen: 6 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] ReadBufferFromFileDescriptorRead: 51 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] ReadBufferFromFileDescriptorReadBytes: 11006056 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] ReadCompressedBytes: 10066986 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] CompressedReadBufferBlocks: 396 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] CompressedReadBufferBytes: 24856720 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] OpenedFileCacheMisses: 6 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] IOBufferAllocs: 9 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] IOBufferAllocBytes: 1382713 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] ArenaAllocChunks: 12 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] ArenaAllocBytes: 33546240 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] MarkCacheMisses: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] CreatedReadBufferOrdinary: 6 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] DiskReadElapsedMicroseconds: 4012 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] SelectedRows: 1776007 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] SelectedBytes: 24864098 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] ContextLock: 1 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] RealTimeMicroseconds: 267183 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] UserTimeMicroseconds: 222260 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] SystemTimeMicroseconds: 43835 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] SoftPageFaults: 35873 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] OSCPUWaitMicroseconds: 9 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] OSCPUVirtualTimeMicroseconds: 266084 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] OSReadChars: 10785792 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] OSWriteChars: 11264 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] QueryProfilerRuns: 2 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3018 ] MemoryTrackerUsage: 55609403 (gauge)
[laptop-5591] 2022.02.18 11:26:11 [ 3020 ] MemoryTrackerUsage: 0 (gauge)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] Seek: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] ReadBufferFromFileDescriptorRead: 61 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] ReadBufferFromFileDescriptorReadBytes: 9781329 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] ReadCompressedBytes: 10299510 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] CompressedReadBufferBlocks: 415 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] CompressedReadBufferBytes: 25231360 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] OpenedFileCacheHits: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] IOBufferAllocs: 9 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] IOBufferAllocBytes: 1386211 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] ArenaAllocChunks: 12 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] ArenaAllocBytes: 33546240 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] MarkCacheHits: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] CreatedReadBufferOrdinary: 5 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] SelectedRows: 1802922 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] SelectedBytes: 25240908 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] ContextLock: 1 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] RealTimeMicroseconds: 266347 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] UserTimeMicroseconds: 218204 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] SystemTimeMicroseconds: 47828 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] SoftPageFaults: 35898 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] OSCPUVirtualTimeMicroseconds: 266022 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] OSWriteBytes: 4096 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] OSReadChars: 9781248 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] OSWriteChars: 10240 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] MemoryTrackerUsage: 64198022 (gauge)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] FileOpen: 15 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] Seek: 9 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] ReadBufferFromFileDescriptorRead: 41 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] ReadBufferFromFileDescriptorReadBytes: 8525071 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] ReadCompressedBytes: 8336159 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] CompressedReadBufferBlocks: 318 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] CompressedReadBufferBytes: 20182050 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] OpenedFileCacheMisses: 15 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] IOBufferAllocs: 30 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] IOBufferAllocBytes: 5861226 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] ArenaAllocChunks: 12 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] ArenaAllocBytes: 33546240 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] MarkCacheMisses: 6 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] CreatedReadBufferOrdinary: 18 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] DiskReadElapsedMicroseconds: 42 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] SelectedRows: 1441668 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] SelectedBytes: 20183352 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] ContextLock: 1 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] RealTimeMicroseconds: 245473 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] UserTimeMicroseconds: 194192 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] SystemTimeMicroseconds: 47839 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] SoftPageFaults: 34288 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] OSCPUWaitMicroseconds: 87 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] OSCPUVirtualTimeMicroseconds: 242025 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] OSReadChars: 8122368 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] OSWriteChars: 10240 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] MemoryTrackerUsage: 62068258 (gauge)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] Query: 1 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] SelectQuery: 1 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] FileOpen: 32 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] Seek: 18 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] ReadBufferFromFileDescriptorRead: 269 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] ReadBufferFromFileDescriptorReadBytes: 59908385 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] ReadCompressedBytes: 57026965 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] CompressedReadBufferBlocks: 2223 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] CompressedReadBufferBytes: 140000000 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] OpenedFileCacheHits: 12 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] OpenedFileCacheMisses: 32 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] IOBufferAllocs: 75 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] IOBufferAllocBytes: 14149190 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] ArenaAllocChunks: 79 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] ArenaAllocBytes: 201306112 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] MarkCacheHits: 18 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] MarkCacheMisses: 13 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] CreatedReadBufferOrdinary: 44 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] DiskReadElapsedMicroseconds: 16091 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] NetworkReceiveElapsedMicroseconds: 44 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] NetworkSendElapsedMicroseconds: 658 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] NetworkSendBytes: 28294 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] SelectedParts: 5 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] SelectedRanges: 5 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] SelectedMarks: 1221 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] SelectedRows: 10000000 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] SelectedBytes: 140000000 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] ContextLock: 88 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] RWLockAcquiredReadLocks: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] RealTimeMicroseconds: 3756041 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] UserTimeMicroseconds: 2856635 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] SystemTimeMicroseconds: 295380 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] SoftPageFaults: 221523 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] OSCPUWaitMicroseconds: 712 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] OSCPUVirtualTimeMicroseconds: 3151964 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] OSWriteBytes: 4096 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] OSReadChars: 59908096 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] OSWriteChars: 65536 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] QueryProfilerRuns: 9 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 0 ] MemoryTrackerUsage: 219393274 (gauge)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] DiskReadElapsedMicroseconds: 4019 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] DiskReadElapsedMicroseconds: 4000 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] MarkCacheHits: 6 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3022 ] OSCPUWaitMicroseconds: 16 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3017 ] QueryProfilerRuns: 2 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] FileOpen: 2 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] FileOpen: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] MarkCacheMisses: 1 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] OpenedFileCacheHits: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3021 ] OpenedFileCacheMisses: 2 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] OpenedFileCacheMisses: 3 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 3019 ] QueryProfilerRuns: 1 (increment)
[laptop-5591] 2022.02.18 11:26:11 [ 11393 ] QueryProfilerRuns: 1 (increment)
```
| https://github.com/ClickHouse/ClickHouse/issues/34719 | https://github.com/ClickHouse/ClickHouse/pull/34749 | c15b5c2cc1ee8bde59b6f491f46ca39e9442a680 | 90ae785e539f3f443f97c9c308c426de73e47049 | "2022-02-18T10:29:14Z" | c++ | "2022-02-22T01:05:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,708 | ["tests/queries/0_stateless/02509_h3_arguments.reference", "tests/queries/0_stateless/02509_h3_arguments.sql"] | h3ToParent() issue on latest stable release 22.2.2.1 | Latest stable release 22.2.2.1
This query works fine:
```sql
select h3ToParent(641573946153969375, 1);
```
This doesn't. It used to work on 22.1.3.7
```sql
select h3ToParent(641573946153969375, arrayJoin([1,2]));
```
Error message:
```
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Illegal type UInt64 of argument 1 of function h3ToParent. Must be UInt64.. (ILLEGAL_COLUMN)
``` | https://github.com/ClickHouse/ClickHouse/issues/34708 | https://github.com/ClickHouse/ClickHouse/pull/44356 | 8d23d2f2f28bbccec309205d77f32d1388f78e03 | 58d849d73229a73e479dcc3a426e0c5943c527dc | "2022-02-18T03:24:48Z" | c++ | "2022-12-27T11:48:58Z" |
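To see which argument types the server actually derives here (and that the error message contradicts itself), the types can be inspected directly. This is only a diagnostic sketch I am adding, using the standard `toTypeName` introspection function:

```sql
-- Argument 1 really is UInt64 and the arrayJoin-ed resolution is a small
-- integer type, yet the error claims argument 1 "Must be UInt64".
SELECT
    toTypeName(641573946153969375) AS h3_index_type,
    toTypeName(arrayJoin([1, 2])) AS resolution_type;
```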
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,682 | ["tests/queries/0_stateless/02234_column_function_short_circuit.reference", "tests/queries/0_stateless/02234_column_function_short_circuit.sql"] | Column Function is not a contiguous block of memory | **Describe what's wrong**
Query:
```
select
float64Field1 * if(strField1 != '', 1.0, dictGetFloat64('sandbox.dict', 'float64Field', (strField1, toDate('2021-01-01'))))
+ if(strField2 != '', 1.0, dictGetFloat64('sandbox.dict', 'float64Field', (strField2, toDate('2021-01-01')))) * if(isFinite(float64Field2), float64Field2, 0)
from sandbox.data_table;
```
Error:
```
Received exception from server (version 21.9.2):
Code: 48. DB::Exception: Received from localhost:9000. DB::Exception: Column Function is not a contiguous block of memory: while executing 'FUNCTION [compiled] plus(multiply(Float64, if(UInt8, 1. : Float64, Float64)), multiply(if(UInt8, 1. : Float64, Float64), if(UInt8, Float64, 0 : UInt8)))(float64Field1 :: 0, notEquals(strField1, '') :: 9, dictGetFloat64('sandbox.dict', 'float64Field', tuple(strField1, toDate('2021-01-01'))) :: 7, notEquals(strField2, '') :: 10, dictGetFloat64('sandbox.dict', 'float64Field', tuple(strField2, toDate('2021-01-01'))) :: 4, isFinite(float64Field2) :: 8, float64Field2 :: 1) -> plus(multiply(float64Field1, if(notEquals(strField1, ''), 1., dictGetFloat64('sandbox.dict', 'float64Field', tuple(strField1, toDate('2021-01-01'))))), multiply(if(notEquals(strField2, ''), 1., dictGetFloat64('sandbox.dict', 'float64Field', tuple(strField2, toDate('2021-01-01')))), if(isFinite(float64Field2), float64Field2, 0))) Float64 : 2'. (NOT_IMPLEMENTED)
```
**Does it reproduce on recent release?**
21.9.2
**How to reproduce**
Preparation steps to reproduce:
```
create database sandbox;
create table sandbox.dict_table
(
`strField` String,
`dateField` Date,
`float64Field` Float64
) Engine Log();
insert into sandbox.dict_table values ('SomeStr', toDate('2021-01-01'), 1.1), ('SomeStr2', toDate('2021-01-02'), 2.2);
create dictionary sandbox.dict
(
`strField` String,
`dateField` Date,
`float64Field` Float64
)
PRIMARY KEY strField, dateField
SOURCE (CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' PASSWORD '123' TABLE 'dict_table'))
LIFETIME(MIN 300 MAX 360)
LAYOUT (COMPLEX_KEY_HASHED());
create table sandbox.data_table
(
`float64Field1` Float64,
`float64Field2` Float64,
`strField1` String,
`strField2` String
) Engine Log();
insert into sandbox.data_table values (1.1, 1.2, 'SomeStr', 'SomeStr'), (2.1, 2.2, 'SomeStr2', 'SomeStr2');
```
Execute query several times (around 4 times):
```
select
float64Field1 * if(strField1 != '', 1.0, dictGetFloat64('sandbox.dict', 'float64Field', (strField1, toDate('2021-01-01'))))
+ if(strField2 != '', 1.0, dictGetFloat64('sandbox.dict', 'float64Field', (strField2, toDate('2021-01-01')))) * if(isFinite(float64Field2), float64Field2, 0)
from sandbox.data_table;
```
Error:
```
2022.02.17 15:27:50.031619 [ 38 ] {9b18efc6-4125-4075-ac7d-23b2792ee9f1} <Error> TCPHandler: Code: 48. DB::Exception: Column Function is not a contiguous block of memory: while executing 'FUNCTION [compiled] plus(multiply(Float64, if(UInt8, 1. : Float64, Float64)), multiply(if(UInt8, 1. : Float64, Float64), if(UInt8, Float64, 0 : UInt8)))(float64Field1 :: 0, notEquals(strField1, '') :: 9, dictGetFloat64('sandbox.dict', 'float64Field', tuple(strField1, toDate('2021-01-01'))) :: 7, notEquals(strField2, '') :: 10, dictGetFloat64('sandbox.dict', 'float64Field', tuple(strField2, toDate('2021-01-01'))) :: 4, isFinite(float64Field2) :: 8, float64Field2 :: 1) -> plus(multiply(float64Field1, if(notEquals(strField1, ''), 1., dictGetFloat64('sandbox.dict', 'float64Field', tuple(strField1, toDate('2021-01-01'))))), multiply(if(notEquals(strField2, ''), 1., dictGetFloat64('sandbox.dict', 'float64Field', tuple(strField2, toDate('2021-01-01')))), if(isFinite(float64Field2), float64Field2, 0))) Float64 : 2'. (NOT_IMPLEMENTED), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x9366e7a in /usr/bin/clickhouse
1. DB::IColumn::getRawData() const @ 0x10503dc4 in /usr/bin/clickhouse
2. DB::getColumnData(DB::IColumn const*) @ 0x10f26900 in /usr/bin/clickhouse
3. DB::LLVMExecutableFunction::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x108354e7 in /usr/bin/clickhouse
4. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x1019327e in /usr/bin/clickhouse
5. DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x10193892 in /usr/bin/clickhouse
6. DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const @ 0x1081d8f2 in /usr/bin/clickhouse
7. DB::ExpressionTransform::transform(DB::Chunk&) @ 0x119767fc in /usr/bin/clickhouse
8. DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0xea01310 in /usr/bin/clickhouse
9. DB::ISimpleTransform::work() @ 0x117e0947 in /usr/bin/clickhouse
10. ? @ 0x1181e99d in /usr/bin/clickhouse
11. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x1181b171 in /usr/bin/clickhouse
12. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x118191cf in /usr/bin/clickhouse
13. DB::PipelineExecutor::execute(unsigned long) @ 0x11818f99 in /usr/bin/clickhouse
14. ? @ 0x11825f1f in /usr/bin/clickhouse
15. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x93a7e9f in /usr/bin/clickhouse
16. ? @ 0x93ab783 in /usr/bin/clickhouse
17. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
18. clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
```
**Workaround**
Disabling the **short_circuit_function_evaluation** setting seems to fix the issue:
`SET short_circuit_function_evaluation = 'disable';` | https://github.com/ClickHouse/ClickHouse/issues/34682 | https://github.com/ClickHouse/ClickHouse/pull/35247 | eb1192934cb49f81c2d05a7c5771130cf5cccf83 | 4712499b83cee3f1514e54999409188c5347b7a0 | "2022-02-17T12:37:21Z" | c++ | "2022-03-14T01:26:33Z" |
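The same workaround can also be applied per query rather than per session. A sketch, assuming the standard `SETTINGS` clause (the query itself is the reproducer from above):

```sql
SELECT
    float64Field1 * if(strField1 != '', 1.0,
        dictGetFloat64('sandbox.dict', 'float64Field', (strField1, toDate('2021-01-01'))))
    + if(strField2 != '', 1.0,
        dictGetFloat64('sandbox.dict', 'float64Field', (strField2, toDate('2021-01-01'))))
      * if(isFinite(float64Field2), float64Field2, 0)
FROM sandbox.data_table
SETTINGS short_circuit_function_evaluation = 'disable';
```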
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,669 | ["tests/queries/0_stateless/02512_array_join_name_resolution.reference", "tests/queries/0_stateless/02512_array_join_name_resolution.sql"] | Cannot find column in source stream with array join | ```sql
CREATE TABLE x ( `arr.key` Array(String), `arr.value` Array(String), `n` String ) ENGINE = Memory;
20.8.17.25 / OK
SELECT
key,
any(toString(n))
FROM
(
SELECT
arr.key AS key,
n
FROM x
ARRAY JOIN arr
)
GROUP BY key
Ok.
0 rows in set. Elapsed: 0.002 sec.
21.8.13 / 22.2.1 / the same query fails
SELECT
key,
any(toString(n))
FROM
(
SELECT
arr.key AS key,
n
FROM x
ARRAY JOIN arr
)
GROUP BY key
Query id: bfd66237-0975-465f-bc57-f97d4621118d
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 21.8.13):
Code: 8. DB::Exception: Received from localhost:9000. DB::Exception: Cannot find column `key` in source stream, there are only columns: [arr.key, toString(n)].
``` | https://github.com/ClickHouse/ClickHouse/issues/34669 | https://github.com/ClickHouse/ClickHouse/pull/44359 | 2eb6e87f59bb8c61ea78517b9f36b6e8700040b0 | 6ace8f13dbae6e9533c23f3a8b74c38a6bc6c540 | "2022-02-17T00:23:07Z" | c++ | "2022-12-20T10:12:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,660 | ["src/Storages/AlterCommands.cpp"] | Alter table drop alias column takes a lot of time and mutates all parts | 22.2.1.4422.
```
create table test_alias (K Int64, D Date, S String, B Int64 Alias length(S)) Engine=MergeTree order by K;
insert into test_alias select number, today(), 'sdfsdfsdfsdfsdfds' from numbers(10000000);
session1:
clickhouse-benchmark -c 10 <<< "insert into test_alias select number, today(), 'sdfsdfsdfsdfsdfds' from numbers(10000)"
session2:
alter table test_alias drop column B;
0 rows in set. Elapsed: 251.043 sec.
```
This can be an especially serious problem with S3 disks.
Alias columns do not persist in parts even in columns.txt | https://github.com/ClickHouse/ClickHouse/issues/34660 | https://github.com/ClickHouse/ClickHouse/pull/34786 | a3e6552fa598c310a55ad0a12acc714da290353d | 5ac8cdbc69c12ab1d95efefbc3c432ee319f4107 | "2022-02-16T18:59:39Z" | c++ | "2022-02-21T13:11:55Z" |
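To confirm that the `DROP COLUMN` of the alias is really being executed as a mutation over all parts, the mutation queue can be watched while the `ALTER` runs. A diagnostic sketch I am adding (not from the original report), assuming the standard `system.mutations` table:

```sql
-- While `ALTER TABLE test_alias DROP COLUMN B` is running, the pending
-- mutation and the number of parts it still has to rewrite show up here:
SELECT mutation_id, command, parts_to_do, is_done
FROM system.mutations
WHERE table = 'test_alias' AND NOT is_done;
```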
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,647 | ["src/Compression/CompressedReadBufferFromFile.cpp", "tests/queries/0_stateless/02267_empty_arrays_read_reverse.reference", "tests/queries/0_stateless/02267_empty_arrays_read_reverse.sql"] | Column compression broken few minutes after data inserted | Clickhouse version: 22.1.3.
I'm creating a new table and importing data into it, based on a join of two tables, like this:
`insert into my_database_{country_iso}.fulltext_new select keyword, stems, search_volumes.search_volume, difficulty, cpc, monthly_sv, peak_month, yoy_change, serp_features from my_database_{country_iso}.search_volumes as search_volumes final left any join my_database_{country_iso}.keyword_data using (keyword) SETTINGS join_use_nulls=1, join_any_take_last_row=1`
The structure of the output table matches the selected columns 1:1:
```
`keyword` String,
`stems` Array(String),
`search_volume` Int32,
`difficulty` Int8 Default -100,
`cpc` Float32,
`monthly_sv` String,
`peak_month` Date,
`yoy_change` Float32,
`serp_features` Array(String),
```
After importing this data, everything works fine for around 1 minute; after that (using the same SELECT queries) I get this error for some queries:
```
Received exception from server (version 22.1.3):
Code: 271. DB::Exception: Received from localhost:9000. DB::Exception: Data compressed with different methods, given method byte 0x1b, previous method byte 0x82: (while reading column serp_features): (while reading from part /var/lib/clickhouse/store/f3d/f3d3328c-24b4-4f71-b40e-b2650ac5229e/all_1_34_2/ from mark 64722 with max_rows_to_read = 195): While executing MergeTreeReverse. (CANNOT_DECOMPRESS)
```
The SELECT is for example following:
`select * from my_database_us.fulltext where hasAll(stems, ['something']) order by search_volume desc`
What is stranger - the query seems to be working until I use ordering using the column search_volume but I totally don't know why because from the error log it seems like it has some problem with column serp_features. But what is strangest for me is that it works after importing. But just for few seconds/minutes and then start showing this error (even for the same queries like before).
Full log:
```
2022.02.16 12:57:26.870809 [ 334594 ] {55202cee-124e-4626-a3b4-e31d5388e410} <Error> TCPHandler: Code: 271. DB::Exception: Data compressed with different methods, given method byte 0x1b, previous method byte 0x82: (while reading column serp_features): (while reading from part /var/lib/clickhouse/store/f3d/f3d3328c-24b4-4f71-b40e-b2650ac5229e/all_1_34_2/ from mark 64722 with max_rows_to_read = 195): While executing MergeTreeReverse. (CANNOT_DECOMPRESS), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa82d07a in /usr/bin/clickhouse
1. DB::CompressedReadBufferBase::readCompressedData(unsigned long&, unsigned long&, bool) @ 0x130929f8 in /usr/bin/clickhouse
2. DB::CompressedReadBufferFromFile::nextImpl() @ 0x130940d5 in /usr/bin/clickhouse
3. ? @ 0x1328c4c2 in /usr/bin/clickhouse
4. DB::ISerialization::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >*) const @ 0x132536f5 in /usr/bin/clickhouse
5. DB::SerializationArray::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >*) const @ 0x1325f4d1 in /usr/bin/clickhouse
6. DB::MergeTreeReaderWide::readData(DB::NameAndTypePair const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, bool, unsigned long, unsigned long, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >&, bool) @ 0x143a5f0f in /usr/bin/clickhouse
7. DB::MergeTreeReaderWide::readRows(unsigned long, unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x143a4e8b in /usr/bin/clickhouse
8. DB::MergeTreeRangeReader::DelayedStream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x14b1fb8e in /usr/bin/clickhouse
9. DB::MergeTreeRangeReader::continueReadingChain(DB::MergeTreeRangeReader::ReadResult&, unsigned long&) @ 0x14b23c79 in /usr/bin/clickhouse
10. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0x14b22cd3 in /usr/bin/clickhouse
11. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0x14b18c08 in /usr/bin/clickhouse
12. DB::MergeTreeReverseSelectProcessor::readFromPart() @ 0x14b37485 in /usr/bin/clickhouse
13. DB::MergeTreeBaseSelectProcessor::generate() @ 0x14b18480 in /usr/bin/clickhouse
14. DB::ISource::tryGenerate() @ 0x148414b5 in /usr/bin/clickhouse
15. DB::ISource::work() @ 0x1484107a in /usr/bin/clickhouse
16. DB::SourceWithProgress::work() @ 0x14a8c662 in /usr/bin/clickhouse
17. DB::ExecutionThreadContext::executeTask() @ 0x14860b23 in /usr/bin/clickhouse
18. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x1485539e in /usr/bin/clickhouse
19. ? @ 0x14856b22 in /usr/bin/clickhouse
20. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa86f4b7 in /usr/bin/clickhouse
21. ? @ 0xa872ebd in /usr/bin/clickhouse
22. ? @ 0x7fef318ec609 in ?
23. __clone @ 0x7fef31813293 in ?
```
It seems like a problem with column compression, but both tables use the native (default) compression, nothing special. The two input tables were created in different ClickHouse versions, so if ClickHouse changed the compression format that could be related, but I don't think so. | https://github.com/ClickHouse/ClickHouse/issues/34647 | https://github.com/ClickHouse/ClickHouse/pull/36215 | 791454678b27ca2c8a1180065c266840c823b257 | c76b9cc9f5c1c7454ae5992f7aedd4ed26f2dd13 | "2022-02-16T13:13:47Z" | c++ | "2022-04-14T11:51:48Z"
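A diagnostic step worth adding here (my suggestion, not from the original report) is to let the server verify the checksums of the affected table directly, assuming the standard `CHECK TABLE` statement:

```sql
-- A part with mismatched compression frames should be reported as failed here:
CHECK TABLE my_database_us.fulltext;
```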
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,626 | ["tests/queries/0_stateless/02808_aliases_inside_case.reference", "tests/queries/0_stateless/02808_aliases_inside_case.sql"] | alias into CASE | > the CASE statement allows you to assign an alias inside yourself, which is not described in the SQL standards
```
with arrayJoin([1,2]) as arg
select arg,
(case
when arg = 1
then 1 as one
when arg = 2
then one / 2
end) as imposible
--arg;imposible
--1;1
--2;0.5
```
release version = 21.11.6.7 | https://github.com/ClickHouse/ClickHouse/issues/34626 | https://github.com/ClickHouse/ClickHouse/pull/51357 | 1eef5086d465e7c365678a5dc333de3e240c148d | ea158254941bf00b5934efaf7dfef87fd0780519 | "2022-02-16T02:25:03Z" | c++ | "2023-07-11T21:47:07Z" |
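For comparison, the alias can be hoisted out of the CASE so the query no longer depends on this non-standard behavior. A sketch of an equivalent portable form (my rewrite, assuming the same intended semantics; the alias names are illustrative):

```sql
with arrayJoin([1, 2]) as arg,
     1 as one          -- alias defined up front instead of inside the CASE
select arg,
       (case
            when arg = 1 then one
            when arg = 2 then one / 2
        end) as result
```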
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,604 | ["docs/en/sql-reference/functions/index.md", "docs/ru/sql-reference/functions/index.md", "src/Interpreters/ExternalUserDefinedExecutableFunctionsLoader.cpp", "src/Interpreters/UserDefinedExecutableFunction.h", "src/Interpreters/UserDefinedExecutableFunctionFactory.cpp", "tests/integration/test_executable_user_defined_function/functions/test_function_config.xml", "tests/integration/test_executable_user_defined_function/test.py", "tests/integration/test_executable_user_defined_function/user_scripts/input_sum_json_named_args.py", "tests/integration/test_executable_user_defined_function/user_scripts/input_sum_json_partially_named_args.py", "tests/integration/test_executable_user_defined_function/user_scripts/input_sum_json_unnamed_args.py"] | Executable UDF: read / write other data formats than TabSeparated | Hi,
I'm trying to figure out how to read / write some data formats other than TabSeparated (e.g. [Native](https://clickhouse.com/docs/en/interfaces/formats/#native) or [JSONEachRow](https://clickhouse.com/docs/en/interfaces/formats/#jsoneachrow)) in my EUDF (a C++ script).
Context:
I have already implemented some EUDFs, but always with TabSeparated or CSV. However, in one of my scripts I'm looking for performance on a quite big array. The query executes in ~10 sec while my script itself takes ~2 sec (benchmarked with RDTSC). Since the query just calls my script (invoked in the same way as the example), I suspect the data format conversion may be too heavy.
To benchmark several formats and find the best compromise between input/output transfer speed and serialisation/deserialisation speed, I wrote a very simple C++ program that writes to standard output whatever it reads from standard input (see below). However, as you already guessed, I'm struggling with some data formats.
I have a table (`table1`) with one column of `Array(UInt64)` that I send as one block of `Array(Array(UInt64))` to my script using `groupArray`.
In the example below I just write back what I read from the client.
info:
Clickhouse version 22.2.1
This is what I use for my tests.
my_function.xml
```
<clickhouse>
<functions>
<type>executable</type>
<name>data_debug</name>
<return_type>Array(Array(UInt64))</return_type>
<argument>
<type>Array(Array(UInt64))</type>
</argument>
<format>Native</format>
<command>data_debug</command>
</functions>
</clickhouse>
```
My c++ script data_debug.cpp:
```
#include <iostream>
#include <string>

int main()
{
    std::ios::sync_with_stdio(false); // static call: covers both streams
    std::cin.tie(nullptr);

    // Use getline instead of operator>>: the latter splits on any
    // whitespace, which would break rows containing tabs or spaces.
    std::string line;
    while (std::getline(std::cin, line)) {
        std::cout << line << "\n";
        std::cout.flush(); // flush per row so the server is not kept waiting
    }
    return 0;
}
```
Compile with:
`g++ -std=c++11 -O3 data_debug.cpp -o data_debug`
Executed in the CH client as follows:
`SELECT data_debug(groupArray(table1.journey)) FROM table1`
Leads to this error:
```
Received exception from server (version 22.2.1):
Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Not found column groupArray(journey) in block. There are only columns: result: While executing Native: While executing ShellCommandSource. (NOT_FOUND_COLUMN_IN_BLOCK)
```
When I use the JSONEachRow format I get:
```
Received exception from server (version 22.2.1):
Code: 117. DB::Exception: Received from localhost:9000. DB::Exception: Unknown field found while parsing JSONEachRow format: groupArray(journey): (at row 1)
: While executing ParallelParsingBlockInputFormat: While executing ShellCommandSource: while executing 'FUNCTION data_debug(groupArray(journey) :: 0) -> data_debug(groupArray(journey)) Array(Array(UInt64)) : 1'. (INCORRECT_DATA)
```
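For reference, the first error message above ("There are only columns: result") suggests the server expects the script's output column to be named `result` rather than `groupArray(journey)`. A hypothetical JSONEachRow echo in Python — the `result` key name is inferred from that error message, not taken from documentation:

```python
import json

def rename_to_result(line: str) -> str:
    """Re-emit one JSONEachRow input row with its single value under the
    key "result" -- the column name mentioned in the Native error above."""
    row = json.loads(line)
    (value,) = row.values()  # the UDF takes exactly one argument
    return json.dumps({"result": value})

# In the real UDF script this would run over sys.stdin, e.g.:
#   for line in sys.stdin: print(rename_to_result(line), flush=True)
print(rename_to_result('{"groupArray(journey)": [[1, 2], [3]]}'))
# -> {"result": [[1, 2], [3]]}
```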
Can you tell me what I'm doing wrong with these two formats? If CH doesn't even accept what it sent itself, I don't know what to do differently. | https://github.com/ClickHouse/ClickHouse/issues/34604 | https://github.com/ClickHouse/ClickHouse/pull/34653 | 1df43a7f57abc0b117056000c8236545a7c55b2b | 174257dad098c24b48639ff9153cd6bdb8fba351 | "2022-02-15T12:09:08Z" | c++ | "2022-02-21T23:02:14Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,586 | ["src/Storages/System/attachInformationSchemaTables.cpp", "tests/queries/0_stateless/02206_information_schema_show_database.reference", "tests/queries/0_stateless/02206_information_schema_show_database.sql"] | Wrong create_table_query for INFORMATION_SCHEMA | ```
SELECT create_table_query
FROM system.tables
WHERE table = 'usual_view'
Query id: f9ef8cf7-25ee-42af-bafa-26d506332c35
ββcreate_table_queryβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β CREATE VIEW default.usual_view (`number` UInt64) AS SELECT * FROM numbers(10) β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
1 rows in set. Elapsed: 0.004 sec.
```
But
```
SELECT create_table_query
FROM system.tables
WHERE table = 'COLUMNS'
Query id: e7032015-4031-47d6-a5e3-c9ee5d787455
ββcreate_table_queryββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β ATTACH VIEW COLUMNS (`table_catalog` String, `table_schema` String, `table_name` String, `column_name` String, `ordinal_position` UInt64, `column_default` String, `is_nullable` UInt8, `data_type` String, `character_maximum_length` Nullable(UInt64), `character_octet_length` Nullable(UInt64), `numeric_precision` Nullable(UInt64), `numeric_precision_radix` Nullable(UInt64), `numeric_scale` Nullable(UInt64), `datetime_precision` Nullable(UInt64), `character_set_catalog` Nullable(String), `character_set_schema` Nullable(String), `character_set_name` Nullable(String), `collation_catalog` Nullable(String), `collation_schema` Nullable(String), `collation_name` Nullable(String), `domain_catalog` Nullable(String), `domain_schema` Nullable(String), `domain_name` Nullable(String), `column_comment` String, `column_type` String, `TABLE_CATALOG` String ALIAS table_catalog, `TABLE_SCHEMA` String ALIAS table_schema, `TABLE_NAME` String ALIAS table_name, `COLUMN_NAME` String ALIAS column_name, `ORDINAL_POSITION` UInt64 ALIAS ordinal_position, `COLUMN_DEFAULT` String ALIAS column_default, `IS_NULLABLE` UInt8 ALIAS is_nullable, `DATA_TYPE` String ALIAS data_type, `CHARACTER_MAXIMUM_LENGTH` Nullable(UInt64) ALIAS character_maximum_length, `CHARACTER_OCTET_LENGTH` Nullable(UInt64) ALIAS character_octet_length, `NUMERIC_PRECISION` Nullable(UInt64) ALIAS numeric_precision, `NUMERIC_PRECISION_RADIX` Nullable(UInt64) ALIAS numeric_precision_radix, `NUMERIC_SCALE` Nullable(UInt64) ALIAS numeric_scale, `DATETIME_PRECISION` Nullable(UInt64) ALIAS datetime_precision, `CHARACTER_SET_CATALOG` Nullable(String) ALIAS character_set_catalog, `CHARACTER_SET_SCHEMA` Nullable(String) ALIAS character_set_schema, `CHARACTER_SET_NAME` Nullable(String) ALIAS character_set_name, `COLLATION_CATALOG` Nullable(String) ALIAS collation_catalog, `COLLATION_SCHEMA` Nullable(String) ALIAS collation_schema, `COLLATION_NAME` Nullable(String) ALIAS collation_name, `DOMAIN_CATALOG` Nullable(String) ALIAS 
domain_catalog, `DOMAIN_SCHEMA` Nullable(String) ALIAS domain_schema, `DOMAIN_NAME` Nullable(String) ALIAS domain_name, `COLUMN_COMMENT` String ALIAS column_comment, `COLUMN_TYPE` String ALIAS column_type) AS SELECT database AS table_catalog, database AS table_schema, table AS table_name, name AS column_name, position AS ordinal_position, default_expression AS column_default, type LIKE 'Nullable(%)' AS is_nullable, type AS data_type, character_octet_length AS character_maximum_length, character_octet_length, numeric_precision, numeric_precision_radix, numeric_scale, datetime_precision, NULL AS character_set_catalog, NULL AS character_set_schema, NULL AS character_set_name, NULL AS collation_catalog, NULL AS collation_schema, NULL AS collation_name, NULL AS domain_catalog, NULL AS domain_schema, NULL AS domain_name, comment AS column_comment, type AS column_type FROM system.columns β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
```
ATTACH vs CREATE, plus the database name is missing. That may lead some backup / recovery tools to recreate the VIEW in a different database.
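A minimal sketch of the kind of defensive check a backup / recovery tool could apply before replaying `create_table_query` (illustrative only; `looks_replayable` is a made-up helper, not a real SQL parser):

```python
def looks_replayable(create_table_query: str, expected_db: str) -> bool:
    """Naive sanity check before replaying a create_table_query:
    it must be a CREATE (not ATTACH) and mention the expected database."""
    q = create_table_query.lstrip()
    return q.upper().startswith("CREATE") and f" {expected_db}." in q

print(looks_replayable(
    "CREATE VIEW default.usual_view (`number` UInt64) AS SELECT * FROM numbers(10)",
    "default"))  # True
print(looks_replayable(
    "ATTACH VIEW COLUMNS (`table_catalog` String) AS SELECT database FROM system.columns",
    "INFORMATION_SCHEMA"))  # False
```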
| https://github.com/ClickHouse/ClickHouse/issues/34586 | https://github.com/ClickHouse/ClickHouse/pull/35480 | 778416f5764072179b5f4bbdef655b7a62a55c59 | 2fdc937ae17c97fcf966ba5b1819a83091c0e590 | "2022-02-14T14:21:06Z" | c++ | "2022-03-21T19:20:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,546 | ["src/Storages/StorageLog.cpp"] | Data race in `FileChecker` in `StorageLog::truncate` | #34528
**Describe the bug**
[A link to the report](https://s3.amazonaws.com/clickhouse-test-reports/34528/6a8e35930ffc25f0574179c5ad93635cfe073121/stress_test__thread__actions_.html)
| https://github.com/ClickHouse/ClickHouse/issues/34546 | https://github.com/ClickHouse/ClickHouse/pull/34558 | 380d9afb2c0ef438d0afc245b381b0b32d8061ca | ae1da31d199eae3ed7c9db6cc8b59a34839e9fe3 | "2022-02-12T07:17:41Z" | c++ | "2022-02-13T13:33:36Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,531 | ["docs/en/interfaces/http.md", "docs/ru/interfaces/http.md", "src/Server/HTTPHandler.cpp", "src/Server/HTTPHandler.h", "tests/integration/test_http_handlers_config/test.py", "tests/integration/test_http_handlers_config/test_dynamic_handler/config.xml", "tests/integration/test_http_handlers_config/test_predefined_handler/config.xml"] | A setting `content_type` to force specified `Content-Type` in output. | I want to add a `predefined_query_handler` to serve a single JSON from my query.
For example
```xml
<rule>
<url>/my/url</url>
<methods>GET</methods>
<handler>
<type>predefined_query_handler</type>
<query>select '{"my": "json"}' format JSONAsString</query>
</handler>
</rule>
```
I can use `RawBLOB`, but it sets the Content-Type to 'text/plain' instead of 'application/json'.
I think JSONAsString should also work with output.
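For what it's worth, the shape I have in mind is an optional per-handler setting, e.g. (hypothetical — this `content_type` option is the proposal, it does not exist yet):

```xml
<rule>
    <url>/my/url</url>
    <methods>GET</methods>
    <handler>
        <type>predefined_query_handler</type>
        <query>select '{"my": "json"}' format JSONAsString</query>
        <content_type>application/json</content_type> <!-- proposed setting -->
    </handler>
</rule>
```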
| https://github.com/ClickHouse/ClickHouse/issues/34531 | https://github.com/ClickHouse/ClickHouse/pull/34916 | 333cbe4a3f8818e851bdad436d7033541a403dd4 | 3755466e8dd82c816a9e2f71d28d3fdb958fbe01 | "2022-02-11T15:07:44Z" | c++ | "2022-05-08T13:36:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,525 | ["src/Client/MultiplexedConnections.cpp", "tests/queries/0_stateless/02221_parallel_replicas_bug.reference", "tests/queries/0_stateless/02221_parallel_replicas_bug.sh"] | Logical error: 'Coordinator for parallel reading from replicas is not initialized'. | How to reproduce: run the test 01099_parallel_distributed_insert_select.sql with the setting `allow_experimental_parallel_reading_from_replicas` enabled:
```
ch-client --allow_experimental_parallel_reading_from_replicas=1 -nmT < 01099_parallel_distributed_insert_select.sql > /dev/null
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:56.946377 [ 29654 ] {26b68ad3-0afc-46c7-9344-f1c4ec98223a} <Fatal> : Logical error: 'Coordinator for parallel reading from replicas is not initialized'.
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:56.948188 [ 29739 ] <Fatal> BaseDaemon: ########################################
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:56.948595 [ 29739 ] <Fatal> BaseDaemon: (version 22.2.1.1, build id: 3214146F63D9D503) (from thread 29654) (query_id: 26b68ad3-0afc-46c7-9344-f1c4ec98223a) (query: INSERT INTO distributed_01099_b SELECT * from distributed_01099_a;) Received signal Aborted (6)
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:56.949127 [ 29739 ] <Fatal> BaseDaemon:
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:56.949662 [ 29739 ] <Fatal> BaseDaemon: Stack trace: 0x7f92be05f18b 0x7f92be03e859 0x7f92c07be1f9 0x7f92c07be309 0x7f92a062d164 0x7f92a062bcb9 0x7f92a062cdd8 0x7f9295e8b1f9 0x7f92972b5a44 0x7f9295eac8c2 0x7f9297066ed9 0x7f9297066dd5 0x7f929706dfc1 0x7f929706e297 0x7f929706fa5f 0x7f929706f9dd 0x7f929706f981 0x7f929706f892 0x7f929706f75b 0x7f929706f61d 0x7f929706f5dd 0x7f929706f5b5 0x7f929706f580 0x7f92c08b4f66 0x7f92c08abd55 0x7f92c08ab715 0x7f92c08b2064 0x7f92c08b1fdd 0x7f92c08b1f05 0x7f92c08b1862 0x7f92be3c1609 0x7f92be13b293
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:57.445361 [ 29739 ] <Fatal> BaseDaemon: 4. raise @ 0x8ef218b in /home/avogar/ClickHouse/build/src/AggregateFunctions/libclickhouse_aggregate_functionsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:57.933354 [ 29739 ] <Fatal> BaseDaemon: 5. abort @ 0x8ed1859 in /home/avogar/ClickHouse/build/src/AggregateFunctions/libclickhouse_aggregate_functionsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:58.030223 [ 29739 ] <Fatal> BaseDaemon: 6. ./build/../src/Common/Exception.cpp:52: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*> > const&) @ 0x3391f9 in /home/avogar/ClickHouse/build/src/libclickhouse_common_iod.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:58.113560 [ 29739 ] <Fatal> BaseDaemon: 7. ./build/../src/Common/Exception.cpp:59: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x339309 in /home/avogar/ClickHouse/build/src/libclickhouse_common_iod.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:58.322790 [ 29739 ] <Fatal> BaseDaemon: 8. ./build/../src/QueryPipeline/RemoteQueryExecutor.cpp:474: DB::RemoteQueryExecutor::processMergeTreeReadTaskRequest(DB::PartitionReadRequest) @ 0x16e164 in /home/avogar/ClickHouse/build/src/libclickhouse_querypipelined.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:58.528226 [ 29739 ] <Fatal> BaseDaemon: 9. ./build/../src/QueryPipeline/RemoteQueryExecutor.cpp:373: DB::RemoteQueryExecutor::processPacket(DB::Packet) @ 0x16ccb9 in /home/avogar/ClickHouse/build/src/libclickhouse_querypipelined.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:58.739597 [ 29739 ] <Fatal> BaseDaemon: 10. ./build/../src/QueryPipeline/RemoteQueryExecutor.cpp:331: DB::RemoteQueryExecutor::read(std::__1::unique_ptr<DB::RemoteQueryExecutorReadContext, std::__1::default_delete<DB::RemoteQueryExecutorReadContext> >&) @ 0x16ddd8 in /home/avogar/ClickHouse/build/src/libclickhouse_querypipelined.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:58.815154 [ 29739 ] <Fatal> BaseDaemon: 11. ./build/../src/Processors/Sources/RemoteSource.cpp:74: DB::RemoteSource::tryGenerate() @ 0xd71f9 in /home/avogar/ClickHouse/build/src/libclickhouse_processors_sourcesd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:58.881761 [ 29739 ] <Fatal> BaseDaemon: 12. ./build/../src/Processors/ISource.cpp:53: DB::ISource::work() @ 0xa0a44 in /home/avogar/ClickHouse/build/src/libclickhouse_processorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:58.941635 [ 29739 ] <Fatal> BaseDaemon: 13. ./build/../src/Processors/Sources/SourceWithProgress.cpp:67: DB::SourceWithProgress::work() @ 0xf88c2 in /home/avogar/ClickHouse/build/src/libclickhouse_processors_sourcesd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:58.970316 [ 29739 ] <Fatal> BaseDaemon: 14. ./build/../src/Processors/Executors/ExecutionThreadContext.cpp:45: DB::executeJob(DB::IProcessor*) @ 0xa3ed9 in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:58.995165 [ 29739 ] <Fatal> BaseDaemon: 15. ./build/../src/Processors/Executors/ExecutionThreadContext.cpp:63: DB::ExecutionThreadContext::executeTask() @ 0xa3dd5 in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:59.093561 [ 29739 ] <Fatal> BaseDaemon: 16. ./build/../src/Processors/Executors/PipelineExecutor.cpp:213: DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0xaafc1 in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:59.190767 [ 29739 ] <Fatal> BaseDaemon: 17. ./build/../src/Processors/Executors/PipelineExecutor.cpp:178: DB::PipelineExecutor::executeSingleThread(unsigned long) @ 0xab297 in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:59.302820 [ 29739 ] <Fatal> BaseDaemon: 18. ./build/../src/Processors/Executors/PipelineExecutor.cpp:306: DB::PipelineExecutor::executeImpl(unsigned long)::$_1::operator()() const @ 0xaca5f in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:59.417138 [ 29739 ] <Fatal> BaseDaemon: 19. ./build/../contrib/libcxx/include/type_traits:3682: decltype(std::__1::forward<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&>(fp)()) std::__1::__invoke_constexpr<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&) @ 0xac9dd in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:59.532980 [ 29739 ] <Fatal> BaseDaemon: 20. ./build/../contrib/libcxx/include/tuple:1415: decltype(auto) std::__1::__apply_tuple_impl<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&, std::__1::__tuple_indices<>) @ 0xac981 in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:59.650354 [ 29739 ] <Fatal> BaseDaemon: 21. ./build/../contrib/libcxx/include/tuple:1424: decltype(auto) std::__1::apply<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&) @ 0xac892 in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:59.744769 [ 29739 ] <Fatal> BaseDaemon: 22. ./build/../src/Common/ThreadPool.h:188: ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()::operator()() @ 0xac75b in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:59.860269 [ 29739 ] <Fatal> BaseDaemon: 23. ./build/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&) @ 0xac61d in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:13:59.976063 [ 29739 ] <Fatal> BaseDaemon: 24. ./build/../contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&...) @ 0xac5dd in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:14:00.092291 [ 29739 ] <Fatal> BaseDaemon: 25. ./build/../contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'(), void ()>::operator()() @ 0xac5b5 in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:14:00.208096 [ 29739 ] <Fatal> BaseDaemon: 26. ./build/../contrib/libcxx/include/functional:2089: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0xac580 in /home/avogar/ClickHouse/build/src/libclickhouse_processors_executorsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:14:00.296988 [ 29739 ] <Fatal> BaseDaemon: 27. ./build/../contrib/libcxx/include/functional:2221: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x42ff66 in /home/avogar/ClickHouse/build/src/libclickhouse_common_iod.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:14:00.373285 [ 29739 ] <Fatal> BaseDaemon: 28. ./build/../contrib/libcxx/include/functional:2560: std::__1::function<void ()>::operator()() const @ 0x426d55 in /home/avogar/ClickHouse/build/src/libclickhouse_common_iod.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:14:00.441348 [ 29739 ] <Fatal> BaseDaemon: 29. ./build/../src/Common/ThreadPool.cpp:277: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x426715 in /home/avogar/ClickHouse/build/src/libclickhouse_common_iod.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:14:00.522380 [ 29739 ] <Fatal> BaseDaemon: 30. ./build/../src/Common/ThreadPool.cpp:142: void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()::operator()() const @ 0x42d064 in /home/avogar/ClickHouse/build/src/libclickhouse_common_iod.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:14:00.611021 [ 29739 ] <Fatal> BaseDaemon: 31. ./build/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<void>(fp)(std::__1::forward<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(fp0)...)) std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...) @ 0x42cfdd in /home/avogar/ClickHouse/build/src/libclickhouse_common_iod.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:14:00.698479 [ 29739 ] <Fatal> BaseDaemon: 32. ./build/../contrib/libcxx/include/thread:281: void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>&, std::__1::__tuple_indices<>) @ 0x42cf05 in /home/avogar/ClickHouse/build/src/libclickhouse_common_iod.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:14:00.785466 [ 29739 ] <Fatal> BaseDaemon: 33. ./build/../contrib/libcxx/include/thread:291: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x42c862 in /home/avogar/ClickHouse/build/src/libclickhouse_common_iod.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:14:00.793666 [ 29739 ] <Fatal> BaseDaemon: 34. ? @ 0x141609 in /home/avogar/ClickHouse/build/contrib/libcxx-cmake/libcxxd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:14:01.285265 [ 29739 ] <Fatal> BaseDaemon: 35. clone @ 0x8fce293 in /home/avogar/ClickHouse/build/src/AggregateFunctions/libclickhouse_aggregate_functionsd.so
[avogar-dev.vla.yp-c.yandex.net] 2022.02.11 16:14:01.285636 [ 29739 ] <Fatal> BaseDaemon: Calculated checksum of the binary: 3D3723E8BAB43521A090B28438FFD59E. There is no information about the reference checksum.
Error on processing query: Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF) (version 22.2.1.1)
(query: INSERT INTO distributed_01099_b SELECT * from distributed_01099_a;)
``` | https://github.com/ClickHouse/ClickHouse/issues/34525 | https://github.com/ClickHouse/ClickHouse/pull/34613 | 72e75fdaf57678d94a4774af35ac2922888d12cb | 22821ccac9bfc890a3124ec594c93ff6d0af4439 | "2022-02-11T13:14:37Z" | c++ | "2022-02-15T20:29:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,514 | ["src/Parsers/ASTSystemQuery.cpp", "src/Parsers/ParserSystemQuery.cpp"] | Reload Function bug report | hi,
v21.12.3.1-stable
In clickhouse-client, execute `system RELOAD FUNCTION on cluster cluster test_function;`.
The actual execution differs from expectations:

| https://github.com/ClickHouse/ClickHouse/issues/34514 | https://github.com/ClickHouse/ClickHouse/pull/34696 | 6280efc0eb73fe4412241b7c4207af68f21c54a1 | 14c54c40f6c1ec1b8e01930235758d0e7f5aafc1 | "2022-02-11T03:35:19Z" | c++ | "2022-02-21T07:24:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,493 | ["src/Storages/MergeTree/KeyCondition.cpp", "tests/queries/0_stateless/02207_key_condition_floats.reference", "tests/queries/0_stateless/02207_key_condition_floats.sql"] | wrong float comparison with constant for Float32 |
```sql
create table test1 (a Float32, b Float32) engine = MergeTree order by a;
insert into test1 values (0.1,0.1), (0.2,0.2);

select count() from test1 where b = 0.1;            -- 0
select count() from test1 where b = toFloat32(0.1); -- 1
select count() from test1 where a > 0;              -- 0
select count() from test1 where a > 0.0;            -- 2
select count() from test1 where b > 0;              -- 2
```
| https://github.com/ClickHouse/ClickHouse/issues/34493 | https://github.com/ClickHouse/ClickHouse/pull/34528 | ea71dc9d110113f0afa1e2044cd5c90bf7abbbd3 | 747b6b20584fd2b619d2977543e2e1a8fb12b447 | "2022-02-10T11:22:03Z" | c++ | "2022-02-12T07:19:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,428 | ["src/Storages/System/StorageSystemAsynchronousInserts.h", "tests/queries/0_stateless/02117_show_create_table_system.reference"] | system.asynchronous_inserts engine lacks System prefix | ```
SELECT
name,
engine
FROM system.tables
WHERE (engine NOT LIKE '%MergeTree') AND (database = 'system')
Query id: 7db15642-bff0-4d07-b3af-9a2495ca9185
ββnameββββββββββββββββββββββββββββ¬βengineββββββββββββββββββββββββββββββ
β aggregate_function_combinators β SystemAggregateFunctionCombinators β
β asynchronous_inserts β AsynchronousInserts β <-- ups, no System
β asynchronous_metrics β SystemAsynchronousMetrics β
β build_options β SystemBuildOptions β
β clusters β SystemClusters β
β collations β SystemTableCollations β
β columns β SystemColumns β
β contributors β SystemContributors β
β current_roles β SystemCurrentRoles β
β data_skipping_indices β SystemDataSkippingIndices β
β data_type_families β SystemTableDataTypeFamilies β
β databases β SystemDatabases β
β detached_parts β SystemDetachedParts β
β dictionaries β SystemDictionaries β
β disks β SystemDisks β
β distributed_ddl_queue β SystemDDLWorkerQueue β
β distribution_queue β SystemDistributionQueue β
β enabled_roles β SystemEnabledRoles β
β errors β SystemErrors β
β events β SystemEvents β
β formats β SystemFormats β
β functions β SystemFunctions β
β grants β SystemGrants β
β graphite_retentions β SystemGraphite β
β licenses β SystemLicenses β
β macros β SystemMacros β
β merge_tree_settings β SystemMergeTreeSettings β
β merges β SystemMerges β
β metrics β SystemMetrics β
β models β SystemModels β
β mutations β SystemMutations β
β numbers β SystemNumbers β
β numbers_mt β SystemNumbers β
β one β SystemOne β
β part_moves_between_shards β SystemShardMoves β
β parts β SystemParts β
β parts_columns β SystemPartsColumns β
β privileges β SystemPrivileges β
β processes β SystemProcesses β
β projection_parts β SystemProjectionParts β
β projection_parts_columns β SystemProjectionPartsColumns β
β quota_limits β SystemQuotaLimits β
β quota_usage β SystemQuotaUsage β
β quotas β SystemQuotas β
β quotas_usage β SystemQuotasUsage β
β replicas β SystemReplicas β
β replicated_fetches β SystemReplicatedFetches β
β replicated_merge_tree_settings β SystemReplicatedMergeTreeSettings β
β replication_queue β SystemReplicationQueue β
β rocksdb β SystemRocksDB β
β role_grants β SystemRoleGrants β
β roles β SystemRoles β
β row_policies β SystemRowPolicies β
β settings β SystemSettings β
β settings_profile_elements β SystemSettingsProfileElements β
β settings_profiles β SystemSettingsProfiles β
β stack_trace β SystemStackTrace β
β storage_policies β SystemStoragePolicies β
β table_engines β SystemTableEngines β
β table_functions β SystemTableFunctions β
β tables β SystemTables β
β time_zones β SystemTimeZones β
β user_directories β SystemUserDirectories β
β users β SystemUsers β
β warnings β SystemWarnings β
β zeros β SystemZeros β
β zeros_mt β SystemZeros β
ββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββ
67 rows in set. Elapsed: 0.005 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/34428 | https://github.com/ClickHouse/ClickHouse/pull/34429 | d680a017e09f409da64a71f994a3a49ec5b8f2ab | 9bb2eba281d77d104b90c19163bfb267df5027f6 | "2022-02-08T18:21:28Z" | c++ | "2022-02-12T07:08:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,407 | ["src/AggregateFunctions/AggregateFunctionNull.cpp", "tests/queries/0_stateless/02888_single_state_nullable_type.reference", "tests/queries/0_stateless/02888_single_state_nullable_type.sql"] | -SimpleState with nullable argument creates Nullable(SimpleAggregateFunction(...)) | ```
SELECT toTypeName(minSimpleState(toNullable(0)))
Query id: ca70501a-76f0-400e-89c2-3539a7ce6740
ββtoTypeName(minSimpleState(toNullable(0)))ββββββ
β Nullable(SimpleAggregateFunction(min, UInt8)) β
βββββββββββββββββββββββββββββββββββββββββββββββββ
```
Should be
```
SimpleAggregateFunction(min, Nullable(UInt8))
``` | https://github.com/ClickHouse/ClickHouse/issues/34407 | https://github.com/ClickHouse/ClickHouse/pull/55030 | c90dbe94ea966166174eda25c74094940cc8792a | ce734149f7190021a0c3fa628d4623027074a867 | "2022-02-08T12:04:49Z" | c++ | "2023-09-28T02:01:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,400 | ["src/Processors/Formats/Impl/TSKVRowInputFormat.cpp", "src/Processors/Transforms/CreatingSetsTransform.h"] | AddressSanitizer: heap-use-after-free DB::ExecutingGraph::updateNode | query:
```
SELECT number FROM remote('127.0.0.{3|2}', numbers(256)) WHERE number GLOBAL IN (SELECT number FROM numbers(9223372036854775807)) SETTINGS async_socket_for_remote = 1, use_hedged_requests = 1, sleep_in_send_data_ms = 10, receive_data_timeout_ms = 1
```
Log
```
2022.02.07 16:15:09.686401 [ 461 ] {} <Fatal> BaseDaemon: ########################################
2022.02.07 16:15:09.686700 [ 461 ] {} <Fatal> BaseDaemon: (version 22.2.1.4050, build id: 3259743F56FAF964) (from thread 201) (query_id: 507ad9c1-ab5b-455b-9185-864264994977) (query: SELECT number FROM remote('127.0.0.{3|2}', numbers(256)) WHERE number GLOBAL IN (SELECT number FROM numbers(9223372036854775807)) SETTINGS async_socket_for_remote = 1, use_hedged_requests = 1, sleep_in_send_data_ms = 10, receive_data_timeout_ms = 1) Received signal Unknown signal (-3)
2022.02.07 16:15:09.686864 [ 461 ] {} <Fatal> BaseDaemon: Sanitizer trap.
2022.02.07 16:15:09.687035 [ 461 ] {} <Fatal> BaseDaemon: Stack trace: 0xcd32bc7 0x27e0a825 0xcc4bf76 0xcc330a1 0xcc34be6 0xcc35458 0x2c832b2b 0x2c82335a 0x2c8227d8 0x2c85766e 0x2ccf70e8 0x2877d03c 0x287a4187 0x2c71514e 0x2c743300 0x333bcb4f 0x333bd8c1 0x33897de6 0x3389109a 0x7f63f6d3e609 0x7f63f6c65293
2022.02.07 16:15:09.712511 [ 461 ] {} <Fatal> BaseDaemon: 0.1. inlined from ./obj-x86_64-linux-gnu/../src/Common/StackTrace.cpp:305: StackTrace::tryCapture()
2022.02.07 16:15:09.712689 [ 461 ] {} <Fatal> BaseDaemon: 0. ../src/Common/StackTrace.cpp:266: StackTrace::StackTrace() @ 0xcd32bc7 in /workspace/clickhouse
2022.02.07 16:15:09.773395 [ 461 ] {} <Fatal> BaseDaemon: 1. ./obj-x86_64-linux-gnu/../base/daemon/BaseDaemon.cpp:0: sanitizerDeathCallback() @ 0x27e0a825 in /workspace/clickhouse
2022.02.07 16:15:13.191721 [ 461 ] {} <Fatal> BaseDaemon: 2. __sanitizer::Die() @ 0xcc4bf76 in /workspace/clickhouse
2022.02.07 16:15:16.577540 [ 461 ] {} <Fatal> BaseDaemon: 3. ? @ 0xcc330a1 in /workspace/clickhouse
2022.02.07 16:15:19.964086 [ 461 ] {} <Fatal> BaseDaemon: 4. __asan::ReportGenericError(unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned int, bool) @ 0xcc34be6 in /workspace/clickhouse
2022.02.07 16:15:23.348800 [ 461 ] {} <Fatal> BaseDaemon: 5. __asan_report_load8 @ 0xcc35458 in /workspace/clickhouse
2022.02.07 16:15:23.398293 [ 461 ] {} <Fatal> BaseDaemon: 6. ./obj-x86_64-linux-gnu/../src/Processors/Executors/ExecutingGraph.cpp:0: DB::ExecutingGraph::updateNode(unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&) @ 0x2c832b2b in /workspace/clickhouse
2022.02.07 16:15:23.439585 [ 461 ] {} <Fatal> BaseDaemon: 7. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:232: DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x2c82335a in /workspace/clickhouse
2022.02.07 16:15:23.477331 [ 461 ] {} <Fatal> BaseDaemon: 8.1. inlined from ./obj-x86_64-linux-gnu/../contrib/libcxx/include/atomic:1006: bool std::__1::__cxx_atomic_load<bool>(std::__1::__cxx_atomic_base_impl<bool> const*, std::__1::memory_order)
2022.02.07 16:15:23.477447 [ 461 ] {} <Fatal> BaseDaemon: 8.2. inlined from ../contrib/libcxx/include/atomic:1615: std::__1::__atomic_base<bool, false>::load(std::__1::memory_order) const
2022.02.07 16:15:23.477513 [ 461 ] {} <Fatal> BaseDaemon: 8.3. inlined from ../contrib/libcxx/include/atomic:1619: std::__1::__atomic_base<bool, false>::operator bool() const
2022.02.07 16:15:23.477575 [ 461 ] {} <Fatal> BaseDaemon: 8.4. inlined from ../src/Processors/Executors/ExecutorTasks.h:49: DB::ExecutorTasks::isFinished() const
2022.02.07 16:15:23.477650 [ 461 ] {} <Fatal> BaseDaemon: 8. ../src/Processors/Executors/PipelineExecutor.cpp:117: DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) @ 0x2c8227d8 in /workspace/clickhouse
2022.02.07 16:15:23.498075 [ 461 ] {} <Fatal> BaseDaemon: 9.1. inlined from ./obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3211: ~shared_ptr
2022.02.07 16:15:23.498201 [ 461 ] {} <Fatal> BaseDaemon: 9. ../src/Processors/Executors/PushingPipelineExecutor.cpp:73: DB::PushingPipelineExecutor::~PushingPipelineExecutor() @ 0x2c85766e in /workspace/clickhouse
2022.02.07 16:15:23.518487 [ 461 ] {} <Fatal> BaseDaemon: 10.1. inlined from ./obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:1397: std::__1::default_delete<DB::PushingPipelineExecutor>::operator()(DB::PushingPipelineExecutor*) const
2022.02.07 16:15:23.518596 [ 461 ] {} <Fatal> BaseDaemon: 10.2. inlined from ../contrib/libcxx/include/memory:1658: std::__1::unique_ptr<DB::PushingPipelineExecutor, std::__1::default_delete<DB::PushingPipelineExecutor> >::reset(DB::PushingPipelineExecutor*)
2022.02.07 16:15:23.518661 [ 461 ] {} <Fatal> BaseDaemon: 10.3. inlined from ../contrib/libcxx/include/memory:1612: ~unique_ptr
2022.02.07 16:15:23.518740 [ 461 ] {} <Fatal> BaseDaemon: 10. ../src/Processors/Transforms/CreatingSetsTransform.cpp:21: DB::CreatingSetsTransform::~CreatingSetsTransform() @ 0x2ccf70e8 in /workspace/clickhouse
2022.02.07 16:15:23.581098 [ 461 ] {} <Fatal> BaseDaemon: 11.1. inlined from ./obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2518: std::__1::__shared_weak_count::__release_shared()
2022.02.07 16:15:23.581212 [ 461 ] {} <Fatal> BaseDaemon: 11.2. inlined from ../contrib/libcxx/include/memory:3212: ~shared_ptr
2022.02.07 16:15:23.581279 [ 461 ] {} <Fatal> BaseDaemon: 11.3. inlined from ../contrib/libcxx/include/memory:891: std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> >::destroy(std::__1::shared_ptr<DB::IProcessor>*)
2022.02.07 16:15:23.581358 [ 461 ] {} <Fatal> BaseDaemon: 11.4. inlined from ../contrib/libcxx/include/__memory/allocator_traits.h:539: void std::__1::allocator_traits<std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::__destroy<std::__1::shared_ptr<DB::IProcessor> >(std::__1::integral_constant<bool, true>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> >&, std::__1::shared_ptr<DB::IProcessor>*)
2022.02.07 16:15:23.581462 [ 461 ] {} <Fatal> BaseDaemon: 11.5. inlined from ../contrib/libcxx/include/__memory/allocator_traits.h:487: void std::__1::allocator_traits<std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::destroy<std::__1::shared_ptr<DB::IProcessor> >(std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> >&, std::__1::shared_ptr<DB::IProcessor>*)
2022.02.07 16:15:23.581542 [ 461 ] {} <Fatal> BaseDaemon: 11.6. inlined from ../contrib/libcxx/include/vector:428: std::__1::__vector_base<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::__destruct_at_end(std::__1::shared_ptr<DB::IProcessor>*)
2022.02.07 16:15:23.581615 [ 461 ] {} <Fatal> BaseDaemon: 11.7. inlined from ../contrib/libcxx/include/vector:371: std::__1::__vector_base<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::clear()
2022.02.07 16:15:23.581683 [ 461 ] {} <Fatal> BaseDaemon: 11.8. inlined from ../contrib/libcxx/include/vector:465: ~__vector_base
2022.02.07 16:15:23.581748 [ 461 ] {} <Fatal> BaseDaemon: 11. ../contrib/libcxx/include/vector:557: std::__1::vector<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::~vector() @ 0x2877d03c in /workspace/clickhouse
2022.02.07 16:15:23.645834 [ 461 ] {} <Fatal> BaseDaemon: 12.1. inlined from ./obj-x86_64-linux-gnu/../src/QueryPipeline/QueryPipeline.cpp:0: ~QueryPipeline
2022.02.07 16:15:23.645951 [ 461 ] {} <Fatal> BaseDaemon: 12. ../src/QueryPipeline/QueryPipeline.cpp:535: DB::QueryPipeline::reset() @ 0x287a4187 in /workspace/clickhouse
2022.02.07 16:15:23.755965 [ 461 ] {} <Fatal> BaseDaemon: 13. ./obj-x86_64-linux-gnu/../src/QueryPipeline/BlockIO.h:0: DB::TCPHandler::runImpl() @ 0x2c71514e in /workspace/clickhouse
2022.02.07 16:15:23.939463 [ 461 ] {} <Fatal> BaseDaemon: 14. ./obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1917: DB::TCPHandler::run() @ 0x2c743300 in /workspace/clickhouse
2022.02.07 16:15:23.942586 [ 461 ] {} <Fatal> BaseDaemon: 15. ./obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x333bcb4f in /workspace/clickhouse
2022.02.07 16:15:23.951887 [ 461 ] {} <Fatal> BaseDaemon: 16.1. inlined from ./obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:1397: std::__1::default_delete<Poco::Net::TCPServerConnection>::operator()(Poco::Net::TCPServerConnection*) const
2022.02.07 16:15:23.951993 [ 461 ] {} <Fatal> BaseDaemon: 16.2. inlined from ../contrib/libcxx/include/memory:1658: std::__1::unique_ptr<Poco::Net::TCPServerConnection, std::__1::default_delete<Poco::Net::TCPServerConnection> >::reset(Poco::Net::TCPServerConnection*)
2022.02.07 16:15:23.952065 [ 461 ] {} <Fatal> BaseDaemon: 16.3. inlined from ../contrib/libcxx/include/memory:1612: ~unique_ptr
2022.02.07 16:15:23.952120 [ 461 ] {} <Fatal> BaseDaemon: 16. ../contrib/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x333bd8c1 in /workspace/clickhouse
2022.02.07 16:15:23.962387 [ 461 ] {} <Fatal> BaseDaemon: 17. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x33897de6 in /workspace/clickhouse
2022.02.07 16:15:23.972231 [ 461 ] {} <Fatal> BaseDaemon: 18. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:0: Poco::ThreadImpl::runnableEntry(void*) @ 0x3389109a in /workspace/clickhouse
2022.02.07 16:15:23.972328 [ 461 ] {} <Fatal> BaseDaemon: 19. ? @ 0x7f63f6d3e609 in ?
2022.02.07 16:15:23.972378 [ 461 ] {} <Fatal> BaseDaemon: 20. clone @ 0x7f63f6c65293 in ?
2022.02.07 16:15:24.795161 [ 461 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 86CD2A70EB48418C857B254DE2A14EAD. There is no information about the reference checksum.
```
report
```
==99==ERROR: AddressSanitizer: heap-use-after-free on address 0x6130007b94d8 at pc 0x00002c832b2b bp 0x7f630ed28d70 sp 0x7f630ed28d68
READ of size 8 at 0x6130007b94d8 thread T3 (TCPHandler)
#0 0x2c832b2a in DB::ExecutingGraph::updateNode(unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&) obj-x86_64-linux-gnu/../src/Processors/Executors/ExecutingGraph.cpp:266:66
#1 0x2c823359 in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:232:29
#2 0x2c8227d7 in DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:115:5
#3 0x2c85766d in DB::PushingPipelineExecutor::finish() obj-x86_64-linux-gnu/../src/Processors/Executors/PushingPipelineExecutor.cpp:118:19
#4 0x2c85766d in DB::PushingPipelineExecutor::~PushingPipelineExecutor() obj-x86_64-linux-gnu/../src/Processors/Executors/PushingPipelineExecutor.cpp:67:9
#5 0x2ccf70e7 in std::__1::default_delete<DB::PushingPipelineExecutor>::operator()(DB::PushingPipelineExecutor*) const obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:1397:5
#6 0x2ccf70e7 in std::__1::unique_ptr<DB::PushingPipelineExecutor, std::__1::default_delete<DB::PushingPipelineExecutor> >::reset(DB::PushingPipelineExecutor*) obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:1658:7
#7 0x2ccf70e7 in std::__1::unique_ptr<DB::PushingPipelineExecutor, std::__1::default_delete<DB::PushingPipelineExecutor> >::~unique_ptr() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:1612:19
#8 0x2ccf70e7 in DB::CreatingSetsTransform::~CreatingSetsTransform() obj-x86_64-linux-gnu/../src/Processors/Transforms/CreatingSetsTransform.cpp:21:47
#9 0x2877d03b in std::__1::__shared_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2475:9
#10 0x2877d03b in std::__1::__shared_weak_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2517:27
#11 0x2877d03b in std::__1::shared_ptr<DB::IProcessor>::~shared_ptr() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3212:19
#12 0x2877d03b in std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> >::destroy(std::__1::shared_ptr<DB::IProcessor>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:891:15
#13 0x2877d03b in void std::__1::allocator_traits<std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::__destroy<std::__1::shared_ptr<DB::IProcessor> >(std::__1::integral_constant<bool, true>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> >&, std::__1::shared_ptr<DB::IProcessor>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:539:21
#14 0x2877d03b in void std::__1::allocator_traits<std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::destroy<std::__1::shared_ptr<DB::IProcessor> >(std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> >&, std::__1::shared_ptr<DB::IProcessor>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:487:14
#15 0x2877d03b in std::__1::__vector_base<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::__destruct_at_end(std::__1::shared_ptr<DB::IProcessor>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:428:9
#16 0x2877d03b in std::__1::__vector_base<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::clear() obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:371:29
#17 0x2877d03b in std::__1::__vector_base<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::~__vector_base() obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:465:9
#18 0x2877d03b in std::__1::vector<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::~vector() obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:557:5
#19 0x287a4186 in DB::QueryPipeline::~QueryPipeline() obj-x86_64-linux-gnu/../src/QueryPipeline/QueryPipeline.cpp:29:31
#20 0x287a4186 in DB::QueryPipeline::reset() obj-x86_64-linux-gnu/../src/QueryPipeline/QueryPipeline.cpp:535:1
#21 0x2c71514d in DB::BlockIO::onException() obj-x86_64-linux-gnu/../src/QueryPipeline/BlockIO.h:48:18
#22 0x2c71514d in DB::TCPHandler::runImpl() obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:398:22
#23 0x2c7432ff in DB::TCPHandler::run() obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1915:9
#24 0x333bcb4e in Poco::Net::TCPServerConnection::start() obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:43:3
#25 0x333bd8c0 in Poco::Net::TCPServerDispatcher::run() obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115:20
#26 0x33897de5 in Poco::PooledThread::run() obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:199:14
#27 0x33891099 in Poco::ThreadImpl::runnableEntry(void*) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345:27
#28 0x7f63f6d3e608 in start_thread /build/glibc-eX1tMB/glibc-2.31/nptl/pthread_create.c:477:8
#29 0x7f63f6c65292 in __clone /build/glibc-eX1tMB/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95
0x6130007b94d8 is located 24 bytes inside of 352-byte region [0x6130007b94c0,0x6130007b9620)
freed by thread T3 (TCPHandler) here:
#0 0xcc61b62 in operator delete(void*, unsigned long) (/workspace/clickhouse+0xcc61b62)
#1 0x2877d043 in std::__1::__shared_weak_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2518:9
#2 0x2877d043 in std::__1::shared_ptr<DB::IProcessor>::~shared_ptr() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3212:19
#3 0x2877d043 in std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> >::destroy(std::__1::shared_ptr<DB::IProcessor>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:891:15
#4 0x2877d043 in void std::__1::allocator_traits<std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::__destroy<std::__1::shared_ptr<DB::IProcessor> >(std::__1::integral_constant<bool, true>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> >&, std::__1::shared_ptr<DB::IProcessor>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:539:21
#5 0x2877d043 in void std::__1::allocator_traits<std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::destroy<std::__1::shared_ptr<DB::IProcessor> >(std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> >&, std::__1::shared_ptr<DB::IProcessor>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:487:14
#6 0x2877d043 in std::__1::__vector_base<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::__destruct_at_end(std::__1::shared_ptr<DB::IProcessor>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:428:9
#7 0x2877d043 in std::__1::__vector_base<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::clear() obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:371:29
#8 0x2877d043 in std::__1::__vector_base<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::~__vector_base() obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:465:9
#9 0x2877d043 in std::__1::vector<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::~vector() obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:557:5
#10 0x28797451 in DB::QueryPipeline::~QueryPipeline() obj-x86_64-linux-gnu/../src/QueryPipeline/QueryPipeline.cpp:29:31
#11 0x2ccf70b0 in DB::CreatingSetsTransform::~CreatingSetsTransform() obj-x86_64-linux-gnu/../src/Processors/Transforms/CreatingSetsTransform.cpp:21:47
#12 0x2877d03b in std::__1::__shared_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2475:9
#13 0x2877d03b in std::__1::__shared_weak_count::__release_shared() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2517:27
#14 0x2877d03b in std::__1::shared_ptr<DB::IProcessor>::~shared_ptr() obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3212:19
#15 0x2877d03b in std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> >::destroy(std::__1::shared_ptr<DB::IProcessor>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:891:15
#16 0x2877d03b in void std::__1::allocator_traits<std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::__destroy<std::__1::shared_ptr<DB::IProcessor> >(std::__1::integral_constant<bool, true>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> >&, std::__1::shared_ptr<DB::IProcessor>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:539:21
#17 0x2877d03b in void std::__1::allocator_traits<std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::destroy<std::__1::shared_ptr<DB::IProcessor> >(std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> >&, std::__1::shared_ptr<DB::IProcessor>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:487:14
#18 0x2877d03b in std::__1::__vector_base<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::__destruct_at_end(std::__1::shared_ptr<DB::IProcessor>*) obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:428:9
#19 0x2877d03b in std::__1::__vector_base<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::clear() obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:371:29
#20 0x2877d03b in std::__1::__vector_base<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::~__vector_base() obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:465:9
#21 0x2877d03b in std::__1::vector<std::__1::shared_ptr<DB::IProcessor>, std::__1::allocator<std::__1::shared_ptr<DB::IProcessor> > >::~vector() obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:557:5
#22 0x287a4186 in DB::QueryPipeline::~QueryPipeline() obj-x86_64-linux-gnu/../src/QueryPipeline/QueryPipeline.cpp:29:31
#23 0x287a4186 in DB::QueryPipeline::reset() obj-x86_64-linux-gnu/../src/QueryPipeline/QueryPipeline.cpp:535:1
#24 0x2c71514d in DB::BlockIO::onException() obj-x86_64-linux-gnu/../src/QueryPipeline/BlockIO.h:48:18
#25 0x2c71514d in DB::TCPHandler::runImpl() obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:398:22
#26 0x2c7432ff in DB::TCPHandler::run() obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1915:9
#27 0x333bcb4e in Poco::Net::TCPServerConnection::start() obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:43:3
#28 0x333bd8c0 in Poco::Net::TCPServerDispatcher::run() obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115:20
#29 0x33897de5 in Poco::PooledThread::run() obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:199:14
#30 0x33891099 in Poco::ThreadImpl::runnableEntry(void*) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345:27
#31 0x7f63f6d3e608 in start_thread /build/glibc-eX1tMB/glibc-2.31/nptl/pthread_create.c:477:8
previously allocated by thread T235 (QueryPipelineEx) here:
#0 0xcc60efd in operator new(unsigned long) (/workspace/clickhouse+0xcc60efd)
#1 0x2b4bcb34 in void* std::__1::__libcpp_operator_new<unsigned long>(unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/new:235:10
#2 0x2b4bcb34 in std::__1::__libcpp_allocate(unsigned long, unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/new:261:10
#3 0x2b4bcb34 in std::__1::allocator<std::__1::__shared_ptr_emplace<DB::MemorySink, std::__1::allocator<DB::MemorySink> > >::allocate(unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:840:38
    #4 0x2b4bcb34 in std::__1::allocator_traits<std::__1::allocator<std::__1::__shared_ptr_emplace<DB::MemorySink, std::__1::allocator<DB::MemorySink> > > >::allocate(std::__1::allocator<std::__1::__shared_ptr_emplace<DB::MemorySink, std::__1::allocator<DB::MemorySink> > >&, unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:468:21
    #5 0x2b4bcb34 in std::__1::__allocation_guard<std::__1::allocator<std::__1::__shared_ptr_emplace<DB::MemorySink, std::__1::allocator<DB::MemorySink> > > >::__allocation_guard<std::__1::allocator<DB::MemorySink> >(std::__1::allocator<DB::MemorySink>, unsigned long) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/utilities.h:56:18
    #6 0x2b4bcb34 in std::__1::shared_ptr<DB::MemorySink> std::__1::allocate_shared<DB::MemorySink, std::__1::allocator<DB::MemorySink>, DB::StorageMemory&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, void>(std::__1::allocator<DB::MemorySink> const&, DB::StorageMemory&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&) obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3359:48
    #7 0x2b4b523f in std::__1::shared_ptr<DB::MemorySink> std::__1::make_shared<DB::MemorySink, DB::StorageMemory&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, void>(DB::StorageMemory&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&) obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3369:12
#8 0x2b4b523f in DB::StorageMemory::write(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>) obj-x86_64-linux-gnu/../src/Storages/StorageMemory.cpp:228:12
#9 0x2ccf9338 in DB::CreatingSetsTransform::startSubquery() obj-x86_64-linux-gnu/../src/Processors/Transforms/CreatingSetsTransform.cpp:53:51
#10 0x2ccf8368 in DB::CreatingSetsTransform::init() obj-x86_64-linux-gnu/../src/Processors/Transforms/CreatingSetsTransform.cpp:93:5
#11 0x2ccf8368 in DB::CreatingSetsTransform::work() obj-x86_64-linux-gnu/../src/Processors/Transforms/CreatingSetsTransform.cpp:39:9
#12 0x2c842965 in DB::executeJob(DB::IProcessor*) obj-x86_64-linux-gnu/../src/Processors/Executors/ExecutionThreadContext.cpp:45:20
#13 0x2c842965 in DB::ExecutionThreadContext::executeTask() obj-x86_64-linux-gnu/../src/Processors/Executors/ExecutionThreadContext.cpp:63:9
#14 0x2c823226 in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:213:26
#15 0x2c825f00 in DB::PipelineExecutor::executeSingleThread(unsigned long) obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:178:5
#16 0x2c825f00 in DB::PipelineExecutor::executeImpl(unsigned long)::$_1::operator()() const obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:306:21
    #17 0x2c825f00 in decltype(std::__1::forward<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&>(fp)()) std::__1::__invoke_constexpr<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682:1
    #18 0x2c825f00 in decltype(auto) std::__1::__apply_tuple_impl<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415:1
    #19 0x2c825f00 in decltype(auto) std::__1::apply<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424:1
    #20 0x2c825f00 in ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()::operator()() obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:188:13
    #21 0x2c825f00 in decltype(std::__1::forward<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1
    #22 0x2c825f00 in void std::__1::__invoke_void_return_wrapper<void>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()&>(ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:348:9
    #23 0x2c825f00 in std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'(), void ()>::operator()() obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608:12
    #24 0x2c825f00 in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089:16
#25 0xce47786 in std::__1::__function::__policy_func<void ()>::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221:16
#26 0xce47786 in std::__1::function<void ()>::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560:12
#27 0xce47786 in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:277:17
#28 0xce4fd16 in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()::operator()() const obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:142:73
    #29 0xce4fd16 in decltype(std::__1::forward<void>(fp)()) std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1
    #30 0xce4fd16 in void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:280:5
    #31 0xce4fd16 in void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:291:5
#32 0x7f63f6d3e608 in start_thread /build/glibc-eX1tMB/glibc-2.31/nptl/pthread_create.c:477:8
Thread T3 (TCPHandler) created by T0 here:
#0 0xcc197dc in pthread_create (/workspace/clickhouse+0xcc197dc)
#1 0x33890349 in Poco::ThreadImpl::startImpl(Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable> >) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:202:6
#2 0x338933dd in Poco::Thread::start(Poco::Runnable&) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread.cpp:128:2
#3 0x338987d9 in Poco::PooledThread::start() obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:85:10
#4 0x338987d9 in Poco::ThreadPool::ThreadPool(int, int, int, int) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:252:12
    #5 0xcc7643b in DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) obj-x86_64-linux-gnu/../programs/server/Server.cpp:585:22
#6 0x333fddba in Poco::Util::Application::run() obj-x86_64-linux-gnu/../contrib/poco/Util/src/Application.cpp:334:8
#7 0xcc6f7d8 in DB::Server::run() obj-x86_64-linux-gnu/../programs/server/Server.cpp:452:25
#8 0x3344343d in Poco::Util::ServerApplication::run(int, char**) obj-x86_64-linux-gnu/../contrib/poco/Util/src/ServerApplication.cpp:611:9
#9 0xcc68f3d in mainEntryClickHouseServer(int, char**) obj-x86_64-linux-gnu/../programs/server/Server.cpp:183:20
#10 0xcc64ac8 in main obj-x86_64-linux-gnu/../programs/main.cpp:378:12
#11 0x7f63f6b6a0b2 in __libc_start_main /build/glibc-eX1tMB/glibc-2.31/csu/../csu/libc-start.c:308:16
Thread T235 (QueryPipelineEx) created by T3 (TCPHandler) here:
#0 0xcc197dc in pthread_create (/workspace/clickhouse+0xcc197dc)
#1 0xce4ecf6 in std::__1::__libcpp_thread_create(unsigned long*, void* (*)(void*), void*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__threading_support:509:10
#2 0xce4ecf6 in std::__1::thread::thread<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'(), void>(void&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:307:16
#3 0xce44059 in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:142:35
#4 0xce52785 in ThreadPoolImpl<std::__1::thread>::scheduleOrThrow(std::__1::function<void ()>, int, unsigned long) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:174:5
#5 0xce52785 in ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&) obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:169:38
#6 0xce4941f in void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:142:35
#7 0xce48ab5 in ThreadPoolImpl<ThreadFromGlobalPool>::scheduleOrThrowOnError(std::__1::function<void ()>, int) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:162:5
    #8 0x2bf72a14 in DB::MergeTreeDataSelectExecutor::filterPartsByPrimaryKeyAndSkipIndexes(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const> > >&&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const>, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context const> const&, DB::KeyCondition const&, DB::MergeTreeReaderSettings const&, Poco::Logger*, unsigned long, std::__1::vector<DB::ReadFromMergeTree::IndexStat, std::__1::allocator<DB::ReadFromMergeTree::IndexStat> >&, bool) obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp:988:22
    #9 0x2cfe59a0 in DB::ReadFromMergeTree::selectRangesToRead(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const> > >, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context const>, unsigned int, std::__1::shared_ptr<std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, long> > > >, DB::MergeTreeData const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, bool, Poco::Logger*) obj-x86_64-linux-gnu/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:928:36
    #10 0x2cfe8a2a in DB::ReadFromMergeTree::selectRangesToRead(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const> > >) const obj-x86_64-linux-gnu/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:825:12
#11 0x2cfe8a2a in DB::ReadFromMergeTree::getAnalysisResult() const obj-x86_64-linux-gnu/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:983:67
#12 0x2cfe9128 in DB::ReadFromMergeTree::initializePipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&) obj-x86_64-linux-gnu/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:992:19
    #13 0x2cf615f1 in DB::ISourceStep::updatePipeline(std::__1::vector<std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder> >, std::__1::allocator<std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder> > > >, DB::BuildQueryPipelineSettings const&) obj-x86_64-linux-gnu/../src/Processors/QueryPlan/ISourceStep.cpp:16:5
#14 0x2cfb52e9 in DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) obj-x86_64-linux-gnu/../src/Processors/QueryPlan/QueryPlan.cpp:169:47
#15 0x2a4e936e in DB::InterpreterSelectWithUnionQuery::execute() obj-x86_64-linux-gnu/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:333:40
#16 0x2aac558c in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:666:36
#17 0x2aac06f0 in DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:985:30
#18 0x2c7123f5 in DB::TCPHandler::runImpl() obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:333:24
#19 0x2c7432ff in DB::TCPHandler::run() obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1915:9
#20 0x333bcb4e in Poco::Net::TCPServerConnection::start() obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:43:3
#21 0x333bd8c0 in Poco::Net::TCPServerDispatcher::run() obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115:20
#22 0x33897de5 in Poco::PooledThread::run() obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:199:14
#23 0x33891099 in Poco::ThreadImpl::runnableEntry(void*) obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345:27
#24 0x7f63f6d3e608 in start_thread /build/glibc-eX1tMB/glibc-2.31/nptl/pthread_create.c:477:8
SUMMARY: AddressSanitizer: heap-use-after-free obj-x86_64-linux-gnu/../src/Processors/Executors/ExecutingGraph.cpp:266:66 in DB::ExecutingGraph::updateNode(unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&)
Shadow bytes around the buggy address:
0x0c26800ef240: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c26800ef250: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c26800ef260: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c26800ef270: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c26800ef280: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c26800ef290: fa fa fa fa fa fa fa fa fd fd fd[fd]fd fd fd fd
0x0c26800ef2a0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c26800ef2b0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c26800ef2c0: fd fd fd fd fa fa fa fa fa fa fa fa fa fa fa fa
0x0c26800ef2d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c26800ef2e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==99==ABORTING
``` | https://github.com/ClickHouse/ClickHouse/issues/34400 | https://github.com/ClickHouse/ClickHouse/pull/34406 | fe7cdd14f71b9eedf5c5ed8c00a21ca7848c1ad5 | 7a30d490a18299543a4e8e22cc0b5b9905df8db5 | "2022-02-08T08:18:14Z" | c++ | "2022-02-10T10:24:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,362 | ["src/Server/GRPCServer.cpp", "src/Server/grpc_protos/clickhouse_grpc.proto", "tests/integration/test_grpc_protocol/test.py"] | Return format in gRPC response | Hi @vitlibar,
In the HTTP implementation, ClickHouse returns the format of the query result in the `X-ClickHouse-Format` header. Can we return the format in the gRPC response as well? This would help clients like the JDBC driver with deserialization (without the need to parse the query). | https://github.com/ClickHouse/ClickHouse/issues/34362 | https://github.com/ClickHouse/ClickHouse/pull/34499 | 340614e5eca00ecf9452da0dcd19f041f6f0b40a | 91bc9cd4cf7ecc8435b1250e497ecad22dc7ed42 | "2022-02-07T04:52:04Z" | c++ | "2022-02-13T14:34:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,350 | ["src/Disks/S3/DiskS3.cpp", "src/Interpreters/threadPoolCallbackRunner.cpp", "src/Interpreters/threadPoolCallbackRunner.h", "src/Storages/StorageS3.cpp"] | StorageS3 does not perform parallel multipart upload. | **Describe the situation**
The author implemented it but enabled only for DiskS3.
CC @excitoon | https://github.com/ClickHouse/ClickHouse/issues/34350 | https://github.com/ClickHouse/ClickHouse/pull/35343 | 66a6352378fe495b1fef550a3c5e084de4aa29df | a90e83665dda049c4f9e89093dd081ddf9cf3508 | "2022-02-06T03:28:15Z" | c++ | "2022-03-24T14:58:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,335 | ["src/Databases/DatabaseMemory.cpp", "tests/queries/0_stateless/01069_database_memory.reference", "tests/queries/0_stateless/02021_create_database_with_comment.reference", "tests/queries/0_stateless/02206_information_schema_show_database.reference", "tests/queries/0_stateless/02206_information_schema_show_database.sql"] | SHOW CREATE DATABASE information_schema return wrong SQL `ENGINE=Memory()` instead of `ENGINE=Memory` | **Describe what's wrong**
inconsistent behavior
```sql
SHOW CREATE DATABASE information_schema
```
returns the following SQL statement
```sql
CREATE DATABASE information_schema
ENGINE = Memory()
```
but when you try to run these queries
```sql
DROP DATABASE information_schema;
CREATE DATABASE information_schema
ENGINE = Memory()
```
It returns
> Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Database engine `Memory` cannot have arguments. (BAD_ARGUMENTS)
**Does it reproduce on recent release?**
Reproduced on any version from 21.11 onwards.
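This is a round-trip bug: `SHOW CREATE` prints `Memory()` with an empty argument list, but the `Memory` engine's parser rejects any argument list, even an empty one. A formatter that keeps its output re-parseable simply omits the empty parentheses — a toy sketch (illustrative Python, not the actual ClickHouse formatting code):

```python
def format_engine(name, args=None):
    """Render an ENGINE clause; omit the '()' when there are no arguments."""
    if not args:
        return f"ENGINE = {name}"
    return f"ENGINE = {name}({', '.join(args)})"

print(format_engine("Memory"))                         # ENGINE = Memory
print(format_engine("ReplacingMergeTree", ["version"]))  # ENGINE = ReplacingMergeTree(version)
```

With that rule, `SHOW CREATE DATABASE` would emit `ENGINE = Memory`, which the parser accepts.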
| https://github.com/ClickHouse/ClickHouse/issues/34335 | https://github.com/ClickHouse/ClickHouse/pull/34345 | 43ee8ddb5bbc1c2807f2d5a5d9e5b9627e1bb7b4 | 51c767d107ce691e5700bf1c11ca2bbb6f141d6f | "2022-02-05T13:03:35Z" | c++ | "2022-02-07T23:45:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,324 | ["src/Common/ProgressIndication.cpp"] | 22.1: progress bar is jumping forward when around 50% | **Describe the issue**
Maybe it's a bug in my implementation of #33271
**How to reproduce**
Run some long query and look carefully with full attention. | https://github.com/ClickHouse/ClickHouse/issues/34324 | https://github.com/ClickHouse/ClickHouse/pull/34801 | f85d8cd3b3c8b729b81a434e7b85baba2e77ee77 | 5621c334e4b8b1950264101106a8de7ce3fe7288 | "2022-02-04T19:35:16Z" | c++ | "2022-02-22T21:37:36Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,303 | ["src/IO/ReadHelpers.cpp", "src/IO/ReadHelpers.h", "src/Processors/Formats/Impl/LineAsStringRowInputFormat.cpp", "src/Processors/Formats/Impl/RawBLOBRowInputFormat.cpp"] | `ProfileEvents::increment` can be in top if you run a query with huge number of threads. | **Describe the situation**
https://markify.dev/p/zfrQMV2/text
**How to reproduce**
On aws `c5ad.24xlarge` run a query:
```
SET max_memory_usage = '100G', max_threads = 1000, max_insert_threads = 96
INSERT INTO wikistat WITH
parseDateTimeBestEffort(extract(toLowCardinality(_path), 'pageviews-([\\d\\-]+)\\.gz$')) AS time,
splitByChar(' ', line) AS values,
splitByChar('.', values[1]) AS projects
SELECT
time,
projects[1] AS project,
projects[2] AS subproject,
values[2] AS path,
CAST(values[3], 'UInt64') AS hits
FROM s3('https://....s3.eu-central-1.amazonaws.com/wikistat/original/*.gz', '...', '...', LineAsString, auto)
WHERE length(values) >= 3
```
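A common way to keep a per-row counter update like this off the profile top at ~1000 threads is to accumulate increments thread-locally and publish them to the shared counter in batches. A generic sketch of the idea (hypothetical `BatchedCounter`; not the actual ProfileEvents implementation, and whether it applies depends on whether the cost is the atomic itself or cache-line contention):

```python
import threading

class BatchedCounter:
    """Accumulate increments per thread; touch the shared total rarely."""

    def __init__(self, flush_every=1024):
        self.total = 0
        self._lock = threading.Lock()
        self._local = threading.local()
        self._flush_every = flush_every

    def increment(self, amount=1):
        pending = getattr(self._local, "pending", 0) + amount
        if pending >= self._flush_every:
            with self._lock:          # contended path, taken rarely
                self.total += pending
            pending = 0
        self._local.pending = pending

    def flush(self):
        """Publish any remaining thread-local remainder (call at thread exit)."""
        with self._lock:
            self.total += getattr(self._local, "pending", 0)
        self._local.pending = 0
```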
Performance is not bad (22 million rows are inserted per second) but can be better. | https://github.com/ClickHouse/ClickHouse/issues/34303 | https://github.com/ClickHouse/ClickHouse/pull/34306 | ab696e6b59456b30385fa66d9f6c5822e8998c5e | 074b827cf35a1d6700ed3f472b9067ee5bf32308 | "2022-02-03T20:30:00Z" | c++ | "2022-02-04T12:11:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,302 | ["src/Interpreters/InterpreterDropQuery.cpp"] | TRUNCATE TABLE does not cancel merges | **Describe the situation**
```
ip-172-31-22-38.eu-central-1.compute.internal :) TRUNCATE wikistat
TRUNCATE TABLE wikistat
Query id: 3a045ee6-201c-4551-9bc1-3cda53376ec2
0 rows in set. Elapsed: 120.008 sec.
Received exception from server (version 22.2.1):
Code: 473. DB::Exception: Received from localhost:9000. DB::Exception: WRITE locking attempt on "default.wikistat" has timed out! (120000ms) Possible deadlock avoided. Client should retry.. (DEADLOCK_AVOIDED)
ip-172-31-22-38.eu-central-1.compute.internal :) TRUNCATE wikistat
TRUNCATE TABLE wikistat
Query id: 4fae1b33-c9cb-45de-8ff2-140d9cba3aa2
Ok.
0 rows in set. Elapsed: 31.647 sec.
ip-172-31-22-38.eu-central-1.compute.internal :)
```
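The 120-second timeout above is the classic writer-behind-readers pattern: TRUNCATE waits for an exclusive table lock while a long-running merge keeps holding a shared one. The usual remedy is to signal cancellation to the current lock holders before (or while) waiting for the exclusive lock — a generic sketch of that ordering (hypothetical Python, not ClickHouse's actual locking code):

```python
import threading

class CancelableWork:
    def __init__(self):
        self.cancel = threading.Event()
        self.lock = threading.Lock()   # stands in for the table's RW lock

    def merge(self, steps=1_000_000):
        with self.lock:
            for _ in range(steps):
                if self.cancel.is_set():   # merges poll a cancellation flag
                    return "cancelled"
        return "done"

    def truncate(self, timeout=5.0):
        self.cancel.set()                  # cancel merges *before* waiting
        if not self.lock.acquire(timeout=timeout):
            raise TimeoutError("DEADLOCK_AVOIDED")
        try:
            return "truncated"
        finally:
            self.lock.release()
            self.cancel.clear()
```

The stack trace below shows the first TRUNCATE parked inside exactly such a lock wait, with no cancellation having been sent.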
```
ββarrayJoin(arrayMap(lambda(tuple(x), demangle(addressToSymbol(x))), trace))ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β β
β std::__1::condition_variable::__do_timed_wait(std::__1::unique_lock<std::__1::mutex>&, std::__1::chrono::time_point<std::__1::chrono::system_clock, std::__1::chrono::duration<long long, std::__1::ratio<1l, 1000000000l> > >) β
β DB::RWLockImpl::getLock(DB::RWLockImpl::Type, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::chrono::duration<long long, std::__1::ratio<1l, 1000l> > const&) β
β DB::IStorage::tryLockTimed(std::__1::shared_ptr<DB::RWLockImpl> const&, DB::RWLockImpl::Type, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::chrono::duration<long long, std::__1::ratio<1l, 1000l> > const&) const β
β DB::IStorage::lockExclusively(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::chrono::duration<long long, std::__1::ratio<1l, 1000l> > const&) β
β DB::InterpreterDropQuery::executeToTableImpl(std::__1::shared_ptr<DB::Context const>, DB::ASTDropQuery&, std::__1::shared_ptr<DB::IDatabase>&, StrongTypedef<wide::integer<128ul, unsigned int>, DB::UUIDTag>&) β
β DB::InterpreterDropQuery::executeToTable(DB::ASTDropQuery&) β
β DB::InterpreterDropQuery::execute() β
β DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) β
β DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) β
β DB::TCPHandler::runImpl() β
β DB::TCPHandler::run() β
β Poco::Net::TCPServerConnection::start() β
β Poco::Net::TCPServerDispatcher::run() β
β Poco::PooledThread::run() β
β Poco::ThreadImpl::runnableEntry(void*)
``` | https://github.com/ClickHouse/ClickHouse/issues/34302 | https://github.com/ClickHouse/ClickHouse/pull/34304 | cf8c76f85976debf341a609fc3ee5cc85a3ed650 | ea3ccd2431c810dd0d28234d5e71959aefae4ee8 | "2022-02-03T19:49:01Z" | c++ | "2022-02-04T09:32:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,301 | ["src/Storages/HDFS/StorageHDFS.cpp", "src/Storages/StorageS3.cpp"] | Query metrics are not shown on progress bar for INSERT SELECT. | This query does not show realtime metrics:
```
INSERT INTO wikistat WITH
parseDateTimeBestEffort(extract(toLowCardinality(_path), 'pageviews-([\\d\\-]+)\\.gz$')) AS time,
splitByChar(' ', line) AS values,
splitByChar('.', values[1]) AS projects
SELECT
time,
projects[1] AS project,
projects[2] AS subproject,
values[2] AS path,
CAST(values[3], 'UInt64') AS hits
FROM s3('https://....s3.eu-central-1.amazonaws.com/wikistat/original/*.gz', '...', '...', LineAsString, auto)
WHERE length(values) >= 3
```
Although this query does:
```
WITH
parseDateTimeBestEffort(extract(toLowCardinality(_path), 'pageviews-([\\d\\-]+)\\.gz$')) AS time,
splitByChar(' ', line) AS values,
splitByChar('.', values[1]) AS projects
SELECT
time,
projects[1] AS project,
projects[2] AS subproject,
values[2] AS path,
CAST(values[3], 'UInt64') AS hits
FROM s3('https://....s3.eu-central-1.amazonaws.com/wikistat/original/*.gz', '...', '...', LineAsString, auto)
WHERE length(values) >= 3
```
I also was not able to cancel the first query. | https://github.com/ClickHouse/ClickHouse/issues/34301 | https://github.com/ClickHouse/ClickHouse/pull/34539 | c2f6d803f79db80b2c2d16e2a85b6c5ab5614a86 | d680a017e09f409da64a71f994a3a49ec5b8f2ab | "2022-02-03T19:36:35Z" | c++ | "2022-02-11T23:36:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,300 | ["src/DataTypes/DataTypeLowCardinality.h", "src/Storages/FileLog/StorageFileLog.cpp", "src/Storages/HDFS/StorageHDFS.cpp", "src/Storages/HDFS/StorageHDFSCluster.cpp", "src/Storages/Hive/StorageHive.cpp", "src/Storages/Kafka/StorageKafka.cpp", "src/Storages/StorageDistributed.cpp", "src/Storages/StorageS3.cpp", "src/Storages/StorageS3Cluster.cpp"] | `_path` virtual column should be `LowCardinality` | ```
ubuntu@ip-172-31-79-147:/opt/downloads$ clickhouse-local --max_threads 8 --query "WITH parseDateTimeBestEffort(extract(_path, 'pageviews-([\\d\\-]+)\\.gz$')) AS time, splitByChar(' ', line) AS values, splitByChar('.', values[1]) AS projects SELECT time, projects[1] AS project, projects[2] AS subproject, values[2] AS path, (values[3])::UInt64 AS hits FROM file('pageviews*.gz', LineAsString) WHERE length(values) >= 3" | pv > /dev/null
^C01GiB 0:01:08 [ 129MiB/s] [ <=> ]
ubuntu@ip-172-31-79-147:/opt/downloads$ clickhouse-local --max_threads 8 --query "WITH parseDateTimeBestEffort(extract(toLowCardinality(_path), 'pageviews-([\\d\\-]+)\\.gz$')) AS time, splitByChar(' ', line) AS values, splitByChar('.', values[1]) AS projects SELECT time, projects[1] AS project, projects[2] AS subproject, values[2] AS path, (values[3])::UInt64 AS hits FROM file('pageviews*.gz', LineAsString) WHERE length(values) >= 3" | pv > /dev/null
^C46GiB 0:00:48 [ 175MiB/s] [ <=> ]
```
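The speedup is plausible because `_path` repeats the same file name for every row read from a given file, so a dictionary-encoded (`LowCardinality`) column lets per-value work such as `extract` run once per distinct path instead of once per row. A toy sketch of the encoding (illustrative Python, not ClickHouse internals):

```python
def dictionary_encode(values):
    """Encode a low-cardinality column as (dictionary, index per row)."""
    dictionary, indexes, positions = [], [], {}
    for v in values:
        if v not in positions:
            positions[v] = len(dictionary)
            dictionary.append(v)
        indexes.append(positions[v])
    return dictionary, indexes

# One .gz file contributes millions of rows but a single distinct _path.
paths = ["pageviews-20220101.gz"] * 4 + ["pageviews-20220102.gz"] * 4
dictionary, indexes = dictionary_encode(paths)
print(len(dictionary), indexes)   # 2 [0, 0, 0, 0, 1, 1, 1, 1]
```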
| https://github.com/ClickHouse/ClickHouse/issues/34300 | https://github.com/ClickHouse/ClickHouse/pull/34333 | 1482d61bef01cad3b7ab6951e908d67fe8c578ae | 58bb0eb2d1bad5785e9a2e4c05e3b485127caa3a | "2022-02-03T19:03:13Z" | c++ | "2022-02-06T23:55:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,281 | ["src/Access/Common/AccessType.h", "src/Access/ContextAccess.cpp", "src/Access/tests/gtest_access_rights_ops.cpp", "tests/queries/0_stateless/01271_show_privileges.reference"] | Wrong privilege level for CREATE / DROP FUNCTION in system.privileges | **Describe what's wrong**
UDFs are global objects, so creating or dropping them requires a global-level privilege.
- https://clickhouse.com/docs/en/sql-reference/statements/create/function/
- https://github.com/ClickHouse/ClickHouse/blob/14811a357e2136ff2940d791f2dbe59c65e87601/tests/integration/test_access_for_functions/test.py#L22
But "system.privileges" table reports DATABASE level for these privileges.
```
SELECT
privilege,
level
FROM system.privileges
WHERE CAST(privilege, 'text') LIKE '%FUNCTION%'
ββprivilegeβββββββββββββββ¬βlevelβββββ
β CREATE FUNCTION β DATABASE β
β DROP FUNCTION β DATABASE β
β SYSTEM RELOAD FUNCTION β GLOBAL β
ββββββββββββββββββββββββββ΄βββββββββββ
```
**Does it reproduce on recent release?**
Actual for the latest versions and master. In particular, for 21.12.4.1.
| https://github.com/ClickHouse/ClickHouse/issues/34281 | https://github.com/ClickHouse/ClickHouse/pull/34404 | 691cb3352b607b5c028be2481d7f730d8d85fb2f | 10439c9d3f444bfa71974db9110970699a31455d | "2022-02-03T10:47:13Z" | c++ | "2022-02-10T23:21:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,244 | ["src/Core/Settings.h", "src/Disks/S3/DiskS3.cpp", "src/Disks/S3/DiskS3.h", "src/Disks/S3/registerDiskS3.cpp", "src/IO/WriteBufferFromS3.cpp", "src/IO/WriteBufferFromS3.h", "src/Storages/StorageS3.cpp", "src/Storages/StorageS3.h", "src/TableFunctions/TableFunctionS3.cpp", "src/TableFunctions/TableFunctionS3Cluster.cpp"] | S3 table function can't upload file while aws s3 CLI can | I tried to insert a 6-billion-row table into S3:
```
INSERT INTO FUNCTION
    s3('https://s3.us-east-1.amazonaws.com/my-bucket/data.csv', 'DFGHJ', 'FGHJIJH', 'CSVWithNames', 'time DateTime, exchangeId UInt16, pairId UInt16, id String, price Decimal(38, 18), volume Decimal(38, 18)')
SELECT * FROM trade;
```
but it fails while trying to upload part 10001, since AWS S3 [supports](https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html) at most 10,000 parts per multipart upload.
I had to export this SELECT into a 600 GB CSV file on disk and then successfully run `aws s3 cp data.csv s3://my-bucket/data.csv`.
I expected ClickHouse to handle this on its own.
ββversion()ββ
β 22.1.3.7 β
βββββββββββββ
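The failure at part 10001 follows directly from S3's multipart limits: at most 10,000 parts per upload (and 5 GiB per part), so a fixed part size caps the total object size. A quick sanity check — the 32 MiB part size below is just an illustrative assumption, not necessarily the server's setting:

```python
S3_MAX_PARTS = 10_000            # AWS hard limit per multipart upload
S3_MAX_PART_SIZE = 5 * 2**30     # 5 GiB per part

def max_object_size(part_size):
    """Largest object that fits if every part has the same fixed size."""
    return S3_MAX_PARTS * min(part_size, S3_MAX_PART_SIZE)

def min_part_size(object_size):
    """Smallest fixed part size that keeps the upload under 10,000 parts."""
    return -(-object_size // S3_MAX_PARTS)   # ceiling division

print(max_object_size(32 * 2**20) / 2**30)   # 312.5 -> GiB ceiling at 32 MiB parts
print(min_part_size(600 * 10**9) / 2**20)    # ~57.2 -> MiB parts needed for 600 GB
```

So to upload larger objects, the writer has to either pick a bigger part size up front or grow the part size as the upload proceeds.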
| https://github.com/ClickHouse/ClickHouse/issues/34244 | https://github.com/ClickHouse/ClickHouse/pull/34422 | 05b89ee865a82fd714e9b331201c7beb6d00bb9a | 437940b29d2e8e95d214ecfbdfcb66b52c7b2cdd | "2022-02-02T08:30:41Z" | c++ | "2022-02-09T09:51:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,239 | ["src/IO/ReadWriteBufferFromHTTP.h", "tests/queries/0_stateless/02205_HTTP_user_agent.python", "tests/queries/0_stateless/02205_HTTP_user_agent.reference", "tests/queries/0_stateless/02205_HTTP_user_agent.sh"] | forbidden requests due to missing user-agent header in URL engine | Some URLs expect a User-Agent header.
Example (version 22.2.1.3767):
```sql
SELECT * FROM url('https://api.github.com/users/clickhouse','JSONAsString');
```
Results in:
```
Code: 86. DB::Exception: Received error from remote server /users/clickhouse. HTTP status code: 403 Forbidden, body:
Request forbidden by administrative rules. Please make sure your request has a User-Agent header (http://developer.github.com/v3/#user-agent-required). Check https://developer.github.com for other possible causes.
: While executing URL. (RECEIVED_ERROR_FROM_REMOTE_IO_SERVER) (version 22.2.1.3767)
```
Maybe it's nice to always send a header along, something like `User-Agent: ClickHouse/22.2.1.3767` ?
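A minimal sketch of the suggested behavior — inject a default `User-Agent` unless the caller already set one (hypothetical helper, not the actual ClickHouse HTTP code):

```python
def with_default_user_agent(headers, version):
    """Return headers with a ClickHouse/<version> User-Agent fallback."""
    merged = {"User-Agent": f"ClickHouse/{version}"}
    merged.update(headers or {})   # an explicit caller header wins
    return merged

print(with_default_user_agent({}, "22.2.1.3767"))
# {'User-Agent': 'ClickHouse/22.2.1.3767'}
```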
| https://github.com/ClickHouse/ClickHouse/issues/34239 | https://github.com/ClickHouse/ClickHouse/pull/34330 | 4a2c69c0735cacf3ae1c97d0766134cc11934f76 | 03f81c86855883450c0224b76867d6b8b0e2c5c7 | "2022-02-01T20:39:59Z" | c++ | "2022-02-12T21:40:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,206 | ["src/IO/parseDateTimeBestEffort.cpp", "tests/queries/0_stateless/02191_parse_date_time_best_effort_more_cases.reference", "tests/queries/0_stateless/02191_parse_date_time_best_effort_more_cases.sql"] | parseDateTimeBestEffort can support format `yymmdd-hhmmss` | **Describe the issue**
```
milovidov-desktop :) SELECT parseDateTimeBestEffort('20220101-010203')
SELECT parseDateTimeBestEffort('20220101-010203')
0 rows in set. Elapsed: 0.153 sec.
Received exception:
Code: 41. DB::Exception: Cannot read DateTime: unexpected number of decimal digits for time zone offset: 6: While processing parseDateTimeBestEffort('20220101-010203'). (CANNOT_PARSE_DATETIME)
``` | https://github.com/ClickHouse/ClickHouse/issues/34206 | https://github.com/ClickHouse/ClickHouse/pull/34208 | 798e0e8242a7125cfca2c0362e8de201b1d92482 | 2b1d1a9a6f00a1267d0814b349e87fde936a7842 | "2022-02-01T00:01:26Z" | c++ | "2022-02-01T13:22:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,204 | ["src/Parsers/ExpressionListParsers.cpp", "src/Parsers/ExpressionListParsers.h", "tests/queries/0_stateless/01852_cast_operator_4.reference", "tests/queries/0_stateless/01852_cast_operator_4.sql"] | Casting and array index operators are not composable | **Describe the issue**
```
SELECT x[1]::UInt64
Syntax error: failed at position 12 ('::')
```
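The two operators fail to compose because `::` is apparently not treated as just one more postfix operator after `[...]`. In a parser where all suffixes (`[i]`, `::Type`) are consumed in a single loop, a chain like `x[1]::UInt64` parses naturally — a toy illustration (Python, not the actual ClickHouse parser):

```python
import re

TOKEN = re.compile(r"\s*(\[|\]|::|\w+)")

def parse(expr):
    tokens = TOKEN.findall(expr)
    pos = 0
    def take():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    node = ("ident", take())            # primary expression
    while peek() in ("[", "::"):        # suffix loop: the operators compose
        if take() == "[":
            node = ("index", node, take())
            take()                      # consume ']'
        else:
            node = ("cast", node, take())
    return node

print(parse("x[1]::UInt64"))
# ('cast', ('index', ('ident', 'x'), '1'), 'UInt64')
```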
| https://github.com/ClickHouse/ClickHouse/issues/34204 | https://github.com/ClickHouse/ClickHouse/pull/34229 | 1c461e9ed1e051efc5ab546200f56ac23550dcac | b21adb8e110f5c74206a1cb1ab87fc2e690b5f43 | "2022-01-31T23:41:22Z" | c++ | "2022-02-08T00:17:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,200 | ["src/Processors/Formats/Impl/RegexpRowInputFormat.cpp", "src/Processors/Formats/Impl/RegexpRowInputFormat.h", "tests/queries/0_stateless/02190_format_regexp_cr_in_the_middle.reference", "tests/queries/0_stateless/02190_format_regexp_cr_in_the_middle.sh", "utils/CMakeLists.txt", "utils/convert-month-partitioned-parts/CMakeLists.txt", "utils/convert-month-partitioned-parts/main.cpp"] | Format `Regexp` is unusable | ```
clickhouse-local \
--format_regexp_escaping_rule 'Raw' \
--format_regexp_skip_unmatched 1 \
--format_regexp '^([^ \.]+)(\.[^ ]+)? +([^ ]+) +(\d+) +(\d+)$' \
--query "
SELECT replaceRegexpOne(_path, '^.+pageviews-(\\d{4})(\\d{2})(\\d{2})-(\\d{2})(\\d{2})(\\d{2}).gz$', '\1-\2-\3 \4-\5-\6')::DateTime AS time, *
FROM file('pageviews*.gz', Regexp, 'project String, subproject String, path String, hits UInt64, size UInt64')"
```
```
Code: 117. DB::Exception: No \n after \r at the end of line.: (at row 148660)
: While executing ParallelParsingBlockInputFormat: While executing File. (INCORRECT_DATA)
```
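The input evidently contains a bare `\r` inside a line (not part of a `\r\n` pair), and the reader rejects it instead of treating `\n` as the only terminator. A tolerant splitter would keep a lone `\r` as ordinary data — a sketch (generic Python, not the C++ input format):

```python
def split_lines(buf):
    """Split on \n only; strip a trailing \r, keep a bare \r inside the line."""
    return [line.rstrip(b"\r") for line in buf.split(b"\n")]

print(split_lines(b"a 1 2\rstill one line\nb 3 4\r\n"))
# [b'a 1 2\rstill one line', b'b 3 4', b'']
```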
```
clickhouse-local \
--format_regexp_escaping_rule 'Raw' \
--format_regexp_skip_unmatched 1 \
--format_regexp '^([^ \.]+)(\.[^ ]+)? +([^ ]+) +(\d+) +(\d+).*?$' \
--query "
SELECT replaceRegexpOne(_path, '^.+pageviews-(\\d{4})(\\d{2})(\\d{2})-(\\d{2})(\\d{2})(\\d{2}).gz$', '\1-\2-\3 \4-\5-\6')::DateTime AS time, *
FROM file('pageviews*.gz', Regexp, 'project String, subproject String, path String, hits UInt64, size UInt64')"
```
```
Code: 117. DB::Exception: No \n after \r at the end of line.: (at row 148660)
: While executing ParallelParsingBlockInputFormat: While executing File. (INCORRECT_DATA)
``` | https://github.com/ClickHouse/ClickHouse/issues/34200 | https://github.com/ClickHouse/ClickHouse/pull/34205 | a93aecf1cb9fc03bc6c71a1b860efda5edc5ba66 | 798e0e8242a7125cfca2c0362e8de201b1d92482 | "2022-01-31T22:47:55Z" | c++ | "2022-02-01T13:22:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,155 | ["docs/en/sql-reference/functions/array-functions.md", "src/Functions/array/arrayFold.cpp", "tests/performance/array_fold.xml", "tests/queries/0_stateless/02718_array_fold.reference", "tests/queries/0_stateless/02718_array_fold.sql", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | `arrayFold` function | See
#21589
#23248
#27270 | https://github.com/ClickHouse/ClickHouse/issues/34155 | https://github.com/ClickHouse/ClickHouse/pull/49794 | e633f36ce91a2ae335010548638aac79bca3aef9 | 624dbcdb4f95acb5400523b6189e74f761ffa6bd | "2022-01-30T15:15:18Z" | c++ | "2023-10-09T11:38:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,148 | ["src/Storages/MergeTree/MergeTreeBackgroundExecutor.cpp", "src/Storages/MergeTree/MergeTreeBackgroundExecutor.h"] | "Cancelled merging parts" is not an error. | I see this in logs on server shutdown:
```
2022.01.30 02:51:22.856463 [ 755777 ] {} <Error> void DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::routine(DB::TaskRuntimeDataPtr) [Queue = DB::MergeMutateRuntimeQueue]: Code: 236. DB::Exception: Cancelled merging parts.
```
but it's not an error.
Log level should be changed and it should not show stack trace. | https://github.com/ClickHouse/ClickHouse/issues/34148 | https://github.com/ClickHouse/ClickHouse/pull/34232 | eeae89388b7747c299c12b8829ac11b8d3da682a | 47d538b52f7009806064b905256140d0e57f9cda | "2022-01-29T23:53:30Z" | c++ | "2022-02-02T07:29:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,135 | ["src/Parsers/ParserDictionary.cpp", "tests/queries/0_stateless/02188_parser_dictionary_primary_key.reference", "tests/queries/0_stateless/02188_parser_dictionary_primary_key.sql"] | Dictionaries complex primary key does not allow ( ) | `PRIMARY KEY (key1,key2,key3)` VS `PRIMARY KEY key1,key2,key3`
```
create database test;
use test;
create table dict_table ( key1 UInt64, key2 String, key3 String, S String)
Engine MergeTree order by (key1, key2, key3);
CREATE DICTIONARY item_dict ( key1 UInt64, key2 String, key3 String, S String )
PRIMARY KEY (key1,key2,key3)
SOURCE(CLICKHOUSE(TABLE dict_table DB 'test' USER 'default'))
LAYOUT(complex_key_direct());
Expected one of: list of elements, identifier
CREATE DICTIONARY item_dict ( key1 UInt64, key2 String, key3 String, S String )
PRIMARY KEY key1,key2,key3
SOURCE(CLICKHOUSE(TABLE dict_table DB 'test' USER 'default'))
LAYOUT(complex_key_direct());
Ok.
0 rows in set. Elapsed: 0.007 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/34135 | https://github.com/ClickHouse/ClickHouse/pull/34141 | 6319d47a7f909f38a31d50635fb8ca1a9dab8bb0 | a5fc8289ae4ccf11f8d82bbdec03042853d6d4d2 | "2022-01-29T02:05:51Z" | c++ | "2022-01-29T18:55:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,091 | ["src/QueryPipeline/RemoteInserter.cpp", "src/QueryPipeline/RemoteInserter.h", "src/Storages/Distributed/DirectoryMonitor.cpp", "tests/integration/test_distributed_insert_backward_compatibility/__init__.py", "tests/integration/test_distributed_insert_backward_compatibility/configs/remote_servers.xml", "tests/integration/test_distributed_insert_backward_compatibility/test.py"] | unknown serialization kind on distributed table |
**Describe the unexpected behaviour**
After upgrading to ClickHouse 22.1.3 (revision 54455), errors occurred in the log when using a distributed table to insert data.
**How to reproduce**
Version:ClickHouse 22.1.3.7
**Expected behavior**
After upgrading to ClickHouse 22.1.3 (revision 54455), errors occurred in the log when using a distributed table to insert data.
All servers in the cluster run the same ClickHouse version, 22.1.3.7.
**Error message and/or stacktrace**
2022.01.28 18:39:41.098174 [ 80700 ] {} <Error> XXX.YYYY.DirectoryMonitor: Code: 246. DB::Exception: Unknown serialization kind 99: While sending /clickhouse/store/4db/4db5491f-555c-4c9d-8db5-491f555cbc9d/shard7_replica1/1571210.bin. (CORRUPTED_DATA), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa82d07a in /usr/bin/clickhouse
1. DB::SerializationInfo::deserializeFromKindsBinary(DB::ReadBuffer&) @ 0x1326c09c in /usr/bin/clickhouse
2. DB::NativeReader::read() @ 0x14826428 in /usr/bin/clickhouse
3. ? @ 0x14195913 in /usr/bin/clickhouse
4. DB::StorageDistributedDirectoryMonitor::processFile(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x1419391a in /usr/bin/clickhouse
5. DB::StorageDistributedDirectoryMonitor::run() @ 0x1418fa78 in /usr/bin/clickhouse
6. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x12f8470e in /usr/bin/clickhouse
7. DB::BackgroundSchedulePool::threadFunction() @ 0x12f870a7 in /usr/bin/clickhouse
8. ? @ 0x12f88170 in /usr/bin/clickhouse
9. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa86f4b7 in /usr/bin/clickhouse
10. ? @ 0xa872ebd in /usr/bin/clickhouse
11. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
12. __clone @ 0xfeb0d in /usr/lib64/libc-2.17.so
(version 22.1.3.7 (official build))
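Kind value 99 is suspicious: it is ASCII `'c'`, which suggests the reader is positioned at the wrong offset in a `.bin` batch file written in a different on-disk format (e.g. left over from before the upgrade), rather than a genuinely new serialization kind. A defensive reader would validate the byte and report that possibility — illustrative Python with hypothetical kind values, not the actual ClickHouse enum:

```python
KNOWN_KINDS = {0: "DEFAULT", 1: "SPARSE"}   # hypothetical numbering

def read_serialization_kind(value):
    if value in KNOWN_KINDS:
        return KNOWN_KINDS[value]
    hint = ""
    if 32 <= value < 127:
        hint = f" (byte is printable {chr(value)!r}; likely a format/version mismatch)"
    raise ValueError(f"Unknown serialization kind {value}{hint}")

assert read_serialization_kind(1) == "SPARSE"
```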
| https://github.com/ClickHouse/ClickHouse/issues/34091 | https://github.com/ClickHouse/ClickHouse/pull/34132 | 9f29c977debd3c99543fdb0a2d2561f9f1c4eafe | 822b58247a7edfb5c612ed3e32b04027920189d1 | "2022-01-28T10:45:12Z" | c++ | "2022-02-09T11:58:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,089 | ["src/Interpreters/InterpreterSelectQuery.cpp", "src/Storages/MergeTree/MergeTreeData.cpp", "tests/integration/test_part_moves_between_shards/test.py"] | Data duplication in `PART MOVE TO SHARD` feature | I wrote a test that runs `PART MOVE TO SHARD` multiple times and got some unexpected behavior.
**Test description:**
1) Create a table
2) Make 2 rows insert on one shard and 1 row insert on some other
3) Stop Merges
4) Create Distributed table
5) Receive part UUID
6) Move this part multiple times
7) Make concurrent check on the number of rows
**How to reproduce**
1) CREATE TABLE IF NOT EXISTS {table_name} on CLUSTER {cluster_name}
(v UInt64)
ENGINE = {table_engine}('/clickhouse/tables/replicated/{shard}/{table_name}', '{replica}')
ORDER BY tuple()
SETTINGS assign_part_uuids=1,
part_moves_between_shards_enable=1,
part_moves_between_shards_delay_seconds=2;
2) INSERT INTO {table_name} VALUES ({value}) - 2 rows on one shard and 1 row on some other
3) SYSTEM STOP MERGES {table_name} - on all nodes
4) CREATE TABLE IF NOT EXISTS {table_name_d} as {table_name}
ENGINE = Distributed({cluster_name}, currentDatabase(), {table_name})
5) SELECT uuid FROM system.parts where name = 'all_0_0_0'
6) SELECT name FROM system.parts where uuid = '{part_uuid}'
ALTER TABLE {table_name} MOVE PART name TO SHARD '/clickhouse/tables/replicated/{shard1}/{table_name}' - multiple times
7) select count() from {table_name_d}
* ClickHouse server version to use
22.1.2
* Queries to run that lead to an unexpected result
select count() from {table_name_d}
Result: 4
**Expected behavior**
Result: 3
**Additional information**
With retries, after some time the result becomes correct. The test gives the correct result when part_moves_between_shards_delay_seconds=0 is set.
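For what it's worth, the flakiness described above looks like the check racing with the background part move. A hedged Python sketch of the polling check such a test could use (`get_count` is a hypothetical helper that runs the `SELECT count()` query; it is not part of the original test):

```python
import time

def poll_count(expected, get_count, timeout=30.0, interval=0.5):
    """Re-run get_count() until it returns `expected` or the timeout
    expires; the last observed value is returned for the caller to
    assert on."""
    deadline = time.monotonic() + timeout
    value = get_count()
    while value != expected and time.monotonic() < deadline:
        time.sleep(interval)
        value = get_count()
    return value
```

With part_moves_between_shards_delay_seconds=2, a timeout of a few seconds should be enough for the duplicate row to disappear, if my reading of the report is right.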
| https://github.com/ClickHouse/ClickHouse/issues/34089 | https://github.com/ClickHouse/ClickHouse/pull/34385 | 065305ab65d2733fea463e863eeabb19be0b1c82 | 1df43a7f57abc0b117056000c8236545a7c55b2b | "2022-01-28T10:36:32Z" | c++ | "2022-02-21T16:53:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,073 | ["src/DataTypes/Serializations/ISerialization.cpp", "tests/queries/0_stateless/02191_nested_with_dots.reference", "tests/queries/0_stateless/02191_nested_with_dots.sql"] | Get Exception while query from Nested Array table in CH version 21.11.2.2 and upper | Clickhouse version 21.10.6.2,21.10.5.3 and lower:
```
CREATE TABLE repro (
`id` UInt64,
`version` UInt16,
`conversations.id` Array(UInt64),
`conversations.routings.id` Array(Array(UInt64))
)
ENGINE = ReplacingMergeTree(version)
ORDER BY (id)
SETTINGS min_bytes_for_wide_part = 0, min_rows_for_wide_part = 0;
INSERT INTO repro (`id`, `version`, `conversations.id`, `conversations.routings.id`)
VALUES (1, 1, [1], [[1]]);
select * from repro;
┌─id─┬─version─┬─conversations.id─┬─conversations.routings.id─┐
│  1 │       1 │ [1]              │ [[1]]                     │
└────┴─────────┴──────────────────┴───────────────────────────┘
```
CH version: 21.11.2.2 and upper:
```
CREATE TABLE repro (
`id` UInt64,
`version` UInt16,
`conversations.id` Array(UInt64),
`conversations.routings.id` Array(Array(UInt64))
)
ENGINE = ReplacingMergeTree(version)
ORDER BY (id)
SETTINGS min_bytes_for_wide_part = 0, min_rows_for_wide_part = 0;
INSERT INTO repro (`id`, `version`, `conversations.id`, `conversations.routings.id`)
VALUES (1, 1, [1], [[1]]);
select * from repro;
Received exception from server (version 21.11.2): <------------- unexpected, it should have results.
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: There is no subcolumn routings.id in type Nested(id UInt64, `routings.id` Array(UInt64)): While executing MergeTreeInOrder. (ILLEGAL_COLUMN)
```
**Unexpected Behaviour:**
The `SELECT` query should not raise an exception.
| https://github.com/ClickHouse/ClickHouse/issues/34073 | https://github.com/ClickHouse/ClickHouse/pull/34228 | ee67071201562706f257b416434de83afc89dd58 | b633a916afa86eb0669662c95afc0e52288ecdc8 | "2022-01-27T21:55:45Z" | c++ | "2022-02-07T16:01:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,044 | ["src/Functions/FunctionsTimeWindow.cpp", "src/Storages/WindowView/StorageWindowView.cpp", "src/Storages/WindowView/StorageWindowView.h"] | Windowed Materialized View using hop does not create overlapping windows. | Hello.
Given the following schema, I would expect a materialized view defined with `hop` to advance a window of length "window interval" forward once every "hop interval", producing overlapping windows. Presently it seems to advance by the whole window interval instead of by the hop interval.
As the data below shows, the windowed view produces only window_interval-aligned results in the destination, with no overlapping hopped windows; it looks suspiciously like a tumble instead of a hop.
Given the time period of 00:16 to 00:18, with a window interval of 60 seconds and a hop interval of 10 seconds, am I correct to assume the proper entries would contain records for [00:16:00, 00:17:00], [00:16:10, 00:17:10], [00:16:20, 00:17:20], etc.?
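To make that expectation concrete, here is a small Python sketch (illustrative only; the alignment of window starts to the hop interval is my assumption, not ClickHouse code) of which overlapping hop windows a single event should fall into:

```python
def hop_windows(t, hop=10, window=60):
    """All hop windows [start, start + window) that should contain time t
    (in seconds), assuming window starts are aligned to the hop interval."""
    first = (t - window + hop) // hop * hop  # earliest aligned start still covering t
    return [(s, s + window) for s in range(first, t + 1, hop)]

# An event at 00:16:16 (976 s past the hour, as in the data below) should
# land in window // hop == 6 overlapping windows, [00:15:20, 00:16:20]
# through [00:16:10, 00:17:10], rather than a single minute-aligned window.
```

With overlapping hops, each row contributes to several windows, so each sum should appear under several hop_start values in the destination table.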
```sql
:) SELECT version()
SELECT version()
Query id: f474ec7a-af51-4980-b5b9-55cd68c05224
┌─version()─┐
│ 22.1.3.7  │
└───────────┘
1 rows in set. Elapsed: 0.004 sec.
CREATE TABLE hop_data(
timestamp Timestamp,
val int
) Engine = TinyLog
CREATE TABLE hop_test(
hop_start DateTime,
hop_end DateTime,
sum_val int
) Engine = TinyLog
CREATE WINDOW VIEW hop_test_wv
TO hop_test
AS
SELECT
hopStart(w_id) AS hop_start,
hopEnd(w_id) AS hop_end,
SUM(val) AS sum_val
FROM
hop_data
GROUP BY hop(timestamp, INTERVAL 10 SECOND, INTERVAL 60 SECOND) AS w_id
SETTINGS allow_experimental_window_view = 1
-- Insert a few data points.
INSERT INTO hop_data VALUES (now(), 1);
:) SELECT * FROM hop_data;
SELECT *
FROM hop_data
Query id: e03f25fb-9b78-4f39-9cc2-81894ec74f7d
┌───────────timestamp─┬─val─┐
│ 2022-01-27 00:16:16 │   1 │
│ 2022-01-27 00:16:21 │   1 │
│ 2022-01-27 00:16:22 │   1 │
│ 2022-01-27 00:16:25 │   1 │
│ 2022-01-27 00:16:26 │   1 │
│ 2022-01-27 00:16:28 │   1 │
│ 2022-01-27 00:16:30 │   1 │
│ 2022-01-27 00:16:31 │   1 │
│ 2022-01-27 00:16:31 │   1 │
│ 2022-01-27 00:17:11 │   1 │
│ 2022-01-27 00:17:11 │   1 │
└─────────────────────┴─────┘
11 rows in set. Elapsed: 0.006 sec.
:) SELECT * FROM hop_test;
SELECT *
FROM hop_test
Query id: e7d8c3a8-c467-419e-8d01-3706dff1d2e1
┌───────────hop_start─┬─────────────hop_end─┬─sum_val─┐
│ 2022-01-27 00:16:00 │ 2022-01-27 00:17:00 │       9 │
│ 2022-01-27 00:17:00 │ 2022-01-27 00:18:00 │       2 │
└─────────────────────┴─────────────────────┴─────────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/34044 | https://github.com/ClickHouse/ClickHouse/pull/36861 | 5877d0038882a94d83fce3e6ea1ea4388282f360 | 376e5564742537ac974e6471c1e437c5fd47ae6e | "2022-01-27T00:24:45Z" | c++ | "2022-05-10T07:25:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,014 | ["src/Interpreters/ProcessList.cpp", "src/Interpreters/ProcessList.h", "tests/queries/0_stateless/02190_current_metrics_query.reference", "tests/queries/0_stateless/02190_current_metrics_query.sql"] | system.metrics name='Query' and system.metric_log CurrentMetric_Query gauge doesn't calculate anymore | **Describe what's wrong**
Query metric doesn't calculate anymore
```sql
SELECT * FROM system.metrics WHERE metric='Query'
```
always returns value=0
```sql
SELECT event_time, CurrentMetrics_Query FROM system.metric_log WHERE CurrentMetrics_Query>0
```
always returns 0 rows
I checked these queries while running the following script in a separate terminal:
```bash
clickhouse-client -q "SELECT sleepEachRow(1),now() FROM numbers(60)"
```
**Does it reproduce on recent release?**
Yes, it reproduces on 22.1 and 21.12,
and does not reproduce on 21.11.
| https://github.com/ClickHouse/ClickHouse/issues/34014 | https://github.com/ClickHouse/ClickHouse/pull/34224 | 56ac75a6e9e87f9f60daf6c8c4ed9f9e15ec9304 | 7304fd654a56bc04ab18a887f5c970ebf927c145 | "2022-01-26T12:38:46Z" | c++ | "2022-02-02T10:39:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 34,010 | ["src/Functions/array/arrayCompact.cpp", "tests/queries/0_stateless/01020_function_array_compact.sql", "tests/queries/0_stateless/01025_array_compact_generic.reference", "tests/queries/0_stateless/01025_array_compact_generic.sql"] | arrayCompact: higher-order ? |
**Describe the unexpected behaviour**
The current implementation of arrayCompact does not behave like other higher-order functions: it accepts a lambda function as its first argument, but then compacts the array returned by the lambda rather than the original array (see below for an example).
**How to reproduce**
* Which ClickHouse server version to use
Any modern
* Queries to run that lead to unexpected result
select arrayCompact(x -> x.2 , groupArray(tuple(number, intDiv(number, 3) % 3))) from numbers(10);
**Expected behavior**
arrayCompact should follow the convention of the other higher-order functions and use the results of the lambda function ([0,0,0,1,1,1,2,2,2,0]) to compact the original array ([(0,0),(1,0),(2,0),(3,1),(4,1),(5,1),(6,2),(7,2),(8,2),(9,0)]) into [(0,0),(3,1),(6,2),(9,0)]
**Additional context**
Use case: a time series of (time, value) pairs with many repeated values that should be compacted to (first time, value) whenever the value repeats.
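The proposed semantics can be sketched in Python (the name `array_compact_by` is mine, purely for illustration):

```python
def array_compact_by(key, arr):
    """Keep each element of `arr` whose key(x) differs from the previous
    element's key: the lambda only decides what counts as a duplicate,
    while elements of the original array are returned."""
    out, prev = [], object()  # sentinel never equal to a real key
    for x in arr:
        k = key(x)
        if k != prev:
            out.append(x)
            prev = k
    return out

rows = [(i, (i // 3) % 3) for i in range(10)]
# keys are [0,0,0,1,1,1,2,2,2,0], so the result keeps the first row of
# each run: [(0, 0), (3, 1), (6, 2), (9, 0)]
```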
| https://github.com/ClickHouse/ClickHouse/issues/34010 | https://github.com/ClickHouse/ClickHouse/pull/34795 | aea7bfb59aa23432b7eb6f69c4ce158c40f65c11 | 7d01516202152c8d60d4fed6b72dad67357d337f | "2022-01-26T11:05:09Z" | c++ | "2022-03-03T18:25:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,982 | ["src/Functions/FunctionStartsEndsWith.h", "src/Functions/GatherUtils/Algorithms.h", "src/Functions/GatherUtils/GatherUtils.h", "src/Functions/GatherUtils/ends_with.cpp", "src/Functions/GatherUtils/starts_with.cpp", "tests/queries/0_stateless/02206_array_starts_ends_with.reference", "tests/queries/0_stateless/02206_array_starts_ends_with.sql"] | startsWith function for arrays | Support for arrays in `startsWith` (and, maybe, `endsWith`)
**Use case**
Would be quite useful for handling tree-like structured data where the node paths are specified as arrays of names or IDs.
**Describe the solution you'd like**
The desired behavior would be like this:
```
startsWith([1, 2, 3, 4], [1, 2]) -- true (1)
startsWith(['qwe', 'rty', 'ui', 'op'], ['qwe', 'rty']) -- true (1)
startsWith([1, 2, 3, 4], [2, 4]) -- false (0)
startsWith([1, 1, 2, 2], [1, 2]) -- false (0)
```
And similarly, for `endsWith`
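A minimal Python sketch of the requested semantics (element-wise prefix/suffix comparison; not a ClickHouse implementation):

```python
def starts_with(arr, prefix):
    """True iff `prefix` equals the leading slice of `arr`."""
    return arr[:len(prefix)] == prefix

def ends_with(arr, suffix):
    """True iff `suffix` equals the trailing slice of `arr`."""
    return suffix == [] or arr[-len(suffix):] == suffix
```

This is the same predicate the arraySlice workaround below computes, but a dedicated function would not need to materialize the sliced array.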
**Describe alternatives you've considered**
Something like `arraySlice(first, 1, length(second)) = second`
But `startsWith` would be much more readable, harder to make a mistake in and, probably, can be better optimized (i.e. might not have to create new objects like `arraySlice` does) | https://github.com/ClickHouse/ClickHouse/issues/33982 | https://github.com/ClickHouse/ClickHouse/pull/34368 | e845a68eebdd2e85963267337d262f1d55e2f9ec | c09275f0dae654c954d5ea2ade98164d3490b940 | "2022-01-25T11:09:42Z" | c++ | "2022-02-18T05:34:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,973 | ["programs/keeper/Keeper.cpp", "programs/server/Server.cpp", "tests/integration/test_keeper_and_access_storage/__init__.py", "tests/integration/test_keeper_and_access_storage/configs/keeper.xml", "tests/integration/test_keeper_and_access_storage/test.py"] | ClickHouse server fails to start when both Keeper and user replication enabled | **Describe what's wrong**
ClickHouse server fails to start with the following config:
```
<keeper_server>
<tcp_port>2181</tcp_port>
<server_id>1</server_id>
<log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
<snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
<coordination_settings>
<operation_timeout_ms>5000</operation_timeout_ms>
<raft_logs_level>trace</raft_logs_level>
<session_timeout_ms>10000</session_timeout_ms>
</coordination_settings>
<raft_configuration>
<server>
<can_become_leader>true</can_become_leader>
<hostname>sas-n2pu34sjneaepg49.db.yandex.net</hostname>
<id>1</id>
<port>2888</port>
<priority>1</priority>
<start_as_follower>false</start_as_follower>
</server>
</raft_configuration>
</keeper_server>
<user_directories>
<users_xml>
<path>users.xml</path>
</users_xml>
<replicated>
<zookeeper_path>/clickhouse/access/</zookeeper_path>
</replicated>
</user_directories>
```
**Does it reproduce on recent release?**
Yes, it's reproducible on 22.1.3.7 and earlier versions (e.g. 21.11.10.1).
**Expected behavior**
ClickHouse server starts and works.
**Error message and/or stacktrace**
```
2022.01.25 10:58:06.320203 [ 47212 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/config.xml'
2022.01.25 10:58:06.321356 [ 47212 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/config.xml', performing update on configuration
2022.01.25 10:58:06.323213 [ 47212 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/config.xml', performed update on configuration
2022.01.25 10:58:06.326661 [ 47212 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2022.01.25 10:58:06.327609 [ 47212 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2022.01.25 10:58:06.328643 [ 47212 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2022.01.25 10:58:06.328913 [ 47212 ] {} <Debug> Access(user directories): Added users.xml access storage 'users.xml', path: /etc/clickhouse-server/users.xml
2022.01.25 10:58:06.328935 [ 47212 ] {} <Debug> Access(user directories): Added replicated access storage 'replicated'
2022.01.25 10:58:06.330201 [ 47212 ] {} <Error> Application: Code: 999. Coordination::Exception: All connection tries failed while connecting to ZooKeeper. nodes: [2a02:6b8:c23:168b:0:1589:e034:1c8a]:2181
Poco::Exception. Code: 1000, e.code() = 111, Connection refused (version 22.1.3.7 (official build)), [2a02:6b8:c23:168b:0:1589:e034:1c8a]:2181
Poco::Exception. Code: 1000, e.code() = 111, Connection refused (version 22.1.3.7 (official build)), [2a02:6b8:c23:168b:0:1589:e034:1c8a]:2181
Poco::Exception. Code: 1000, e.code() = 111, Connection refused (version 22.1.3.7 (official build)), [2a02:6b8:c23:168b:0:1589:e034:1c8a]:2181
(Connection loss). (KEEPER_EXCEPTION), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa82d07a in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
1. Coordination::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Coordination::Error, int) @ 0x14c1e7f5 in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
2. Coordination::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Coordination::Error) @ 0x14c1eb36 in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
3. Coordination::ZooKeeper::connect(std::__1::vector<Coordination::ZooKeeper::Node, std::__1::allocator<Coordination::ZooKeeper::Node> > const&, Poco::Timespan) @ 0x14c61a2f in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
4. Coordination::ZooKeeper::ZooKeeper(std::__1::vector<Coordination::ZooKeeper::Node, std::__1::allocator<Coordination::ZooKeeper::Node> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Poco::Timespan, Poco::Timespan, Poco::Timespan, std::__1::shared_ptr<DB::ZooKeeperLog>) @ 0x14c5fedf in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
5. zkutil::ZooKeeper::init(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x14c21101 in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
6. zkutil::ZooKeeper::ZooKeeper(Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::ZooKeeperLog>) @ 0x14c2380d in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
7. void std::__1::allocator<zkutil::ZooKeeper>::construct<zkutil::ZooKeeper, Poco::Util::AbstractConfiguration const&, char const (&) [10], std::__1::shared_ptr<DB::ZooKeeperLog> >(zkutil::ZooKeeper*, Poco::Util::AbstractConfiguration const&, char const (&) [10], std::__1::shared_ptr<DB::ZooKeeperLog>&&) @ 0x134c769b in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
8. DB::Context::getZooKeeper() const @ 0x134a6ab6 in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
9. DB::ReplicatedAccessStorage::initializeZookeeper() @ 0x12f2be52 in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
10. DB::ReplicatedAccessStorage::startup() @ 0x12f2bd0f in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
11. DB::AccessControl::addReplicatedStorage(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::function<std::__1::shared_ptr<zkutil::ZooKeeper> ()> const&) @ 0x12e57aea in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
12. DB::AccessControl::addStoragesFromUserDirectoriesConfig(Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::function<std::__1::shared_ptr<zkutil::ZooKeeper> ()> const&) @ 0x12e5a8a5 in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
13. DB::AccessControl::addStoragesFromMainConfig(Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::function<std::__1::shared_ptr<zkutil::ZooKeeper> ()> const&) @ 0x12e5be61 in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
14. DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xa8b2ce9 in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
15. Poco::Util::Application::run() @ 0x174746c6 in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
16. DB::Server::run() @ 0xa8a8614 in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
17. mainEntryClickHouseServer(int, char**) @ 0xa8a5bc7 in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
18. main @ 0xa82742a in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
19. __libc_start_main @ 0x21bf7 in /lib/x86_64-linux-gnu/libc-2.27.so
20. _start @ 0xa6ae3ee in /usr/lib/debug/.build-id/d1/1bc54a7fe20e44.debug
(version 22.1.3.7 (official build))
2022.01.25 10:58:06.331876 [ 47212 ] {} <Error> Application: Coordination::Exception: All connection tries failed while connecting to ZooKeeper. nodes: [2a02:6b8:c23:168b:0:1589:e034:1c8a]:2181
Poco::Exception. Code: 1000, e.code() = 111, Connection refused (version 22.1.3.7 (official build)), [2a02:6b8:c23:168b:0:1589:e034:1c8a]:2181
Poco::Exception. Code: 1000, e.code() = 111, Connection refused (version 22.1.3.7 (official build)), [2a02:6b8:c23:168b:0:1589:e034:1c8a]:2181
Poco::Exception. Code: 1000, e.code() = 111, Connection refused (version 22.1.3.7 (official build)), [2a02:6b8:c23:168b:0:1589:e034:1c8a]:2181
(Connection loss)
2022.01.25 10:58:06.331966 [ 47212 ] {} <Information> Application: shutting down
2022.01.25 10:58:06.331972 [ 47212 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2022.01.25 10:58:06.332029 [ 47213 ] {} <Information> BaseDaemon: Stop SignalListener thread
2022.01.25 10:58:06.362269 [ 47211 ] {} <Information> Application: Child process exited normally with code 70.
```
| https://github.com/ClickHouse/ClickHouse/issues/33973 | https://github.com/ClickHouse/ClickHouse/pull/33988 | 0d6032f0fe361d157370eabee08ed24edbbe4494 | 0105f7e0bccb074904ed914222ddc53bb0beca0c | "2022-01-25T08:02:22Z" | c++ | "2022-01-25T19:54:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,964 | ["docker/test/stateless/Dockerfile", "docker/test/stateless/setup_minio.sh", "src/IO/WriteBufferFromS3.cpp", "tests/queries/0_stateless/02207_s3_content_type.reference", "tests/queries/0_stateless/02207_s3_content_type.sh"] | Wrong s3 object content type for large files | Connected to ClickHouse server version 21.13.1 revision 54455.
```
INSERT INTO FUNCTION s3('https://s3.us-east-1.amazonaws.com/dzhuravlev/small_file.xml.gz', 'XML',
'number UInt64', 'gz') SELECT * FROM numbers(10);
INSERT INTO FUNCTION s3('https://s3.us-east-1.amazonaws.com/dzhuravlev/large_file.xml.gz','XML',
'number UInt64','gz') SELECT * from numbers(1e8);
aws s3api head-object --bucket dzhuravlev --key small_file.xml.gz
{
"AcceptRanges": "bytes",
"LastModified": "Mon, 24 Jan 2022 21:28:50 GMT",
"ContentLength": 262,
"ETag": "\"667796620f85cd6bad6957ea01d29548\"",
"ContentType": "binary/octet-stream",
"Metadata": {}
}
aws s3api head-object --bucket dzhuravlev --key large_file.xml.gz
{
"AcceptRanges": "bytes",
"LastModified": "Mon, 24 Jan 2022 21:29:08 GMT",
"ContentLength": 261909920,
"ETag": "\"829dbc9697bcffd1528e8e6f55c564f4-8\"",
"ContentType": "application/xml", ---<<<<<<<<<<<-- expected "ContentType": "binary/octet-stream",
"Metadata": {}
}
```
So it seems that when the object size is bigger than 32 MB, ClickHouse sets the wrong ContentType, `application/xml`, instead of `binary/octet-stream`.
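For reference, the `-8` suffix in the large object's ETag is the S3 multipart part count, which is consistent with a ~32 MB part size; a quick sanity check (the part size here is inferred from this report, not taken from the ClickHouse source):

```python
def multipart_part_count(size_bytes, part_size=32 * 1024 * 1024):
    """Estimated number of multipart-upload parts at a fixed part size."""
    return -(-size_bytes // part_size)  # ceiling division

# 261909920 bytes at 32 MiB per part gives 8 parts, matching the ETag
# suffix "-8"; the 262-byte file stays a single-part upload.
```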
| https://github.com/ClickHouse/ClickHouse/issues/33964 | https://github.com/ClickHouse/ClickHouse/pull/34433 | 7411178064218abc4eec790f98bad3b892439695 | 6df2c9c2d87a3cbbcb2cea68f1f2a8f419384530 | "2022-01-24T21:37:36Z" | c++ | "2022-02-17T10:11:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,945 | ["src/CMakeLists.txt", "src/Storages/StorageDistributed.cpp", "src/TableFunctions/CMakeLists.txt", "src/TableFunctions/TableFunctionView.cpp", "src/TableFunctions/TableFunctionView.h", "tests/queries/0_stateless/02225_parallel_distributed_insert_select_view.reference", "tests/queries/0_stateless/02225_parallel_distributed_insert_select_view.sh"] | parallel_distributed_insert_select + cluster('xxx',view()) | **Use case**
Run view on each shard individually and insert into local node.
**Describe the solution you'd like**
```sql
INSERT INTO FUNCTION cluster('all-sharded', map, output, key_2) SELECT *
FROM cluster('all-sharded', view(
SELECT *
FROM map.input
WHERE (key_1 % 2) = toUInt32(getMacro('all-sharded-shard'))
))
SETTINGS parallel_distributed_insert_select = 2
Query id: 0ef3707f-2b74-4367-bf6d-82939a64e0f2
0 rows in set. Elapsed: 0.002 sec.
Received exception from server (version 21.11.10):
Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Both table name and UUID are empty. (UNKNOWN_TABLE)
``` | https://github.com/ClickHouse/ClickHouse/issues/33945 | https://github.com/ClickHouse/ClickHouse/pull/35132 | c364908061ff76d82fa7076e327ddc8a191a2712 | 6bfee7aca2bbe1cda17525b8aec38da5ffbbca00 | "2022-01-24T11:04:41Z" | c++ | "2022-03-09T08:10:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,944 | ["src/Functions/array/mapPopulateSeries.cpp", "tests/performance/map_populate_series.xml", "tests/queries/0_stateless/01318_map_populate_series.reference", "tests/queries/0_stateless/01318_map_populate_series.sql", "tests/queries/0_stateless/01925_map_populate_series_on_map.reference", "tests/queries/0_stateless/01925_map_populate_series_on_map.sql", "tests/queries/0_stateless/02205_map_populate_series_non_const.reference", "tests/queries/0_stateless/02205_map_populate_series_non_const.sql"] | mapPopulateSeries LOGICAL_ERROR | **Describe what's wrong**
**How to reproduce**
```sql
SELECT mapPopulateSeries(range(10), range(10), toUInt8(number))
FROM numbers(10)
LIMIT 1
Query id: 5eb2e3fb-26a0-4a17-8a37-9714d35095bd
┌─mapPopulateSeries(range(10), range(10), toUInt8(number))─┐
│ ([0],[0])                                                │
└──────────────────────────────────────────────────────────┘
┌─mapPopulateSeries(range(10), range(10), toUInt8(number))─┐
│ ([0],[0])                                                │
└──────────────────────────────────────────────────────────┘
2 rows in set. Elapsed: 0.004 sec.
SELECT mapPopulateSeries(range(900), range(900), materialize(1000)).2
FROM numbers(100)
FORMAT `Null`
Query id: 6418bf79-5d29-4709-86d6-7bdee9b7c9f7
Progress: 0.00 rows, 0.00 B (0.00 rows/s., 0.00 B/s.)
1 rows in set. Elapsed: 0.014 sec.
Received exception from server (version 21.13.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid number of rows in Chunk column Array(UInt16) position 0: expected 100, got 1. (LOGICAL_ERROR)
SELECT mapPopulateSeries(range(900), range(900), toUInt16(number)).2
FROM numbers(100)
LIMIT 1000
┌─tupleElement(mapPopulateSeries(range(900), range(900), toUInt16(number)), 2)─┐
│ [0]                                                                          │
└──────────────────────────────────────────────────────────────────────────────┘
Progress: 0.00 rows, 0.00 B (0.00 rows/s., 0.00 B/s.)
1 rows in set. Elapsed: 0.007 sec.
Received exception from server (version 21.13.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid number of rows in Chunk column Array(UInt16) position 0: expected 100, got 1. (LOGICAL_ERROR)
SELECT mapPopulateSeries(range(10), range(10), toUInt16(number))
FROM numbers(10)
LIMIT 1
Query id: 7b6d6e3d-19f1-489b-93a7-022eaa507af4
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 21.13.1):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Max key type in mapPopulateSeries should be same as keys type: While processing mapPopulateSeries(range(10), range(10), toUInt16(number)). (ILLEGAL_TYPE_OF_ARGUMENT)
```
**Error message and/or stacktrace**
```
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid number of rows in Chunk column Array(UInt16) position 0: expected 100, got 1. (LOGICAL_ERROR)
``` | https://github.com/ClickHouse/ClickHouse/issues/33944 | https://github.com/ClickHouse/ClickHouse/pull/34318 | 47e2b3a35ae90fa860a3a16b211a049b28a4516b | eff16baaf369ca8ad94d1d4ab5517f7766c7eeb1 | "2022-01-24T10:59:47Z" | c++ | "2022-02-05T11:51:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,923 | ["src/Interpreters/InterpreterCreateFunctionQuery.cpp", "tests/queries/0_stateless/02181_sql_user_defined_functions_invalid_lambda.sql"] | std::out_of_range on CREATE FUNCTION | https://s3.amazonaws.com/clickhouse-test-reports/33889/3a252cd3a0966f5bf882e2b302999e123b9301d2/fuzzer_astfuzzerdebug,actions//report.html
```
2022.01.22 00:39:21.673579 [ 472 ] {} <Fatal> BaseDaemon: ########################################
2022.01.22 00:39:21.675012 [ 472 ] {} <Fatal> BaseDaemon: (version 22.2.1.3000, build id: 6D91CE62E3960D39) (from thread 234) (query_id: 36a47e41-72ec-41e4-8064-dcf017b3c51e) Received signal Aborted (6)
2022.01.22 00:39:21.675815 [ 472 ] {} <Fatal> BaseDaemon:
2022.01.22 00:39:21.676500 [ 472 ] {} <Fatal> BaseDaemon: Stack trace: 0x7fdf417d218b 0x7fdf417b1859 0x24fe8d3b 0x24ff86a5 0x29207359 0x29207b68 0x2944b0b4 0x29447b9a 0x2944697c 0x7fdf41987609 0x7fdf418ae293
2022.01.22 00:39:21.677194 [ 472 ] {} <Fatal> BaseDaemon: 4. raise @ 0x7fdf417d218b in ?
2022.01.22 00:39:21.677892 [ 472 ] {} <Fatal> BaseDaemon: 5. abort @ 0x7fdf417b1859 in ?
2022.01.22 00:39:22.034545 [ 472 ] {} <Fatal> BaseDaemon: 6. ./obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x24fe8d3b in /workspace/clickhouse
2022.01.22 00:39:22.394230 [ 472 ] {} <Fatal> BaseDaemon: 7. ./obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1909: DB::TCPHandler::run() @ 0x24ff86a5 in /workspace/clickhouse
2022.01.22 00:39:22.464246 [ 472 ] {} <Fatal> BaseDaemon: 8. ./obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x29207359 in /workspace/clickhouse
2022.01.22 00:39:22.549113 [ 472 ] {} <Fatal> BaseDaemon: 9. ./obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x29207b68 in /workspace/clickhouse
2022.01.22 00:39:22.639127 [ 472 ] {} <Fatal> BaseDaemon: 10. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:199: Poco::PooledThread::run() @ 0x2944b0b4 in /workspace/clickhouse
2022.01.22 00:39:22.728469 [ 472 ] {} <Fatal> BaseDaemon: 11. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x29447b9a in /workspace/clickhouse
2022.01.22 00:39:22.816303 [ 472 ] {} <Fatal> BaseDaemon: 12. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345: Poco::ThreadImpl::runnableEntry(void*) @ 0x2944697c in /workspace/clickhouse
2022.01.22 00:39:22.816642 [ 472 ] {} <Fatal> BaseDaemon: 13. ? @ 0x7fdf41987609 in ?
2022.01.22 00:39:22.816963 [ 472 ] {} <Fatal> BaseDaemon: 14. clone @ 0x7fdf418ae293 in ?
2022.01.22 00:39:24.001222 [ 472 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 168AE79E3DB1C247FB5A72EB06D98BFF. There is no information about the reference checksum.
2022.01.22 00:39:25.834255 [ 473 ] {} <Fatal> BaseDaemon: ########################################
2022.01.22 00:39:25.834438 [ 473 ] {} <Fatal> BaseDaemon: (version 22.2.1.3000, build id: 6D91CE62E3960D39) (from thread 236) (query_id: 149206c3-071f-4ec2-9fb0-dfc8ed0e5d29) Received signal Aborted (6)
2022.01.22 00:39:25.834608 [ 473 ] {} <Fatal> BaseDaemon:
2022.01.22 00:39:25.834763 [ 473 ] {} <Fatal> BaseDaemon: Stack trace: 0x7fdf417d218b 0x7fdf417b1859 0x24fe8d3b 0x24ff86a5 0x29207359 0x29207b68 0x2944b0b4 0x29447b9a 0x2944697c 0x7fdf41987609 0x7fdf418ae293
2022.01.22 00:39:25.834895 [ 473 ] {} <Fatal> BaseDaemon: 4. raise @ 0x7fdf417d218b in ?
2022.01.22 00:39:25.835035 [ 473 ] {} <Fatal> BaseDaemon: 5. abort @ 0x7fdf417b1859 in ?
2022.01.22 00:39:26.160350 [ 473 ] {} <Fatal> BaseDaemon: 6. ./obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x24fe8d3b in /workspace/clickhouse
2022.01.22 00:39:26.518700 [ 473 ] {} <Fatal> BaseDaemon: 7. ./obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1909: DB::TCPHandler::run() @ 0x24ff86a5 in /workspace/clickhouse
2022.01.22 00:39:26.590074 [ 473 ] {} <Fatal> BaseDaemon: 8. ./obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x29207359 in /workspace/clickhouse
2022.01.22 00:39:26.674713 [ 473 ] {} <Fatal> BaseDaemon: 9. ./obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x29207b68 in /workspace/clickhouse
2022.01.22 00:39:26.764319 [ 473 ] {} <Fatal> BaseDaemon: 10. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:199: Poco::PooledThread::run() @ 0x2944b0b4 in /workspace/clickhouse
2022.01.22 00:39:26.853097 [ 473 ] {} <Fatal> BaseDaemon: 11. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x29447b9a in /workspace/clickhouse
2022.01.22 00:39:26.940903 [ 473 ] {} <Fatal> BaseDaemon: 12. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345: Poco::ThreadImpl::runnableEntry(void*) @ 0x2944697c in /workspace/clickhouse
2022.01.22 00:39:26.941206 [ 473 ] {} <Fatal> BaseDaemon: 13. ? @ 0x7fdf41987609 in ?
2022.01.22 00:39:26.941477 [ 473 ] {} <Fatal> BaseDaemon: 14. clone @ 0x7fdf418ae293 in ?
2022.01.22 00:39:28.123745 [ 473 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 168AE79E3DB1C247FB5A72EB06D98BFF. There is no information about the reference checksum.
```
```
2022.01.22 00:39:13.611975 [ 234 ] {36a47e41-72ec-41e4-8064-dcf017b3c51e} <Debug> executeQuery: (from [::ffff:127.0.0.1]:40278) CREATE FUNCTION `02181_invalid_lambda` AS lambda(plus(x_doubled))
2022.01.22 00:39:13.612150 [ 234 ] {36a47e41-72ec-41e4-8064-dcf017b3c51e} <Trace> ContextAccess (default): Access granted: CREATE FUNCTION ON *.*
2022.01.22 00:39:14.409588 [ 234 ] {36a47e41-72ec-41e4-8064-dcf017b3c51e} <Error> executeQuery: std::exception. Code: 1001, type: std::out_of_range, e.what() = vector (version 22.2.1.3000) (from [::ffff:127.0.0.1]:40278) (in query: CREATE FUNCTION `02181_invalid_lambda` AS lambda(plus(x_doubled))), Stack trace (when copying this message, always include the lines below):
2022.01.22 00:39:21.675012 [ 472 ] {} <Fatal> BaseDaemon: (version 22.2.1.3000, build id: 6D91CE62E3960D39) (from thread 234) (query_id: 36a47e41-72ec-41e4-8064-dcf017b3c51e) Received signal Aborted (6)
```
```
2022.01.22 00:39:25.832595 [ 236 ] {149206c3-071f-4ec2-9fb0-dfc8ed0e5d29} <Debug> executeQuery: (from [::ffff:127.0.0.1]:40280) CREATE FUNCTION `02181_invalid_lambda` AS lambda(lambda(x))
2022.01.22 00:39:25.832794 [ 236 ] {149206c3-071f-4ec2-9fb0-dfc8ed0e5d29} <Trace> ContextAccess (default): Access granted: CREATE FUNCTION ON *.*
2022.01.22 00:39:25.833269 [ 236 ] {149206c3-071f-4ec2-9fb0-dfc8ed0e5d29} <Error> executeQuery: std::exception. Code: 1001, type: std::out_of_range, e.what() = vector (version 22.2.1.3000) (from [::ffff:127.0.0.1]:40280) (in query: CREATE FUNCTION `02181_invalid_lambda` AS lambda(lambda(x))), Stack trace (when copying this message, always include the lines below):
2022.01.22 00:39:25.834438 [ 473 ] {} <Fatal> BaseDaemon: (version 22.2.1.3000, build id: 6D91CE62E3960D39) (from thread 236) (query_id: 149206c3-071f-4ec2-9fb0-dfc8ed0e5d29) Received signal Aborted (6)
``` | https://github.com/ClickHouse/ClickHouse/issues/33923 | https://github.com/ClickHouse/ClickHouse/pull/33924 | 57dae1e3084b76a8a12b2a705f2088fdd50389dd | 7fffe5846f07882c41a25175eaea159fb2309eb4 | "2022-01-23T13:56:47Z" | c++ | "2022-01-23T21:06:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,906 | ["src/Interpreters/convertFieldToType.cpp", "tests/queries/0_stateless/02286_convert_decimal_type.reference", "tests/queries/0_stateless/02286_convert_decimal_type.sql"] | Integer from VALUES cannot be parsed as Decimal256 | `SELECT * FROM values('x Decimal256(0)', (1))`
```
Code: 53. DB::Exception: Type mismatch in IN or VALUES section. Expected: Decimal(76, 0). Got: UInt64. (TYPE_MISMATCH)
``` | https://github.com/ClickHouse/ClickHouse/issues/33906 | https://github.com/ClickHouse/ClickHouse/pull/36489 | 0fa63a8d65937b39f3f4db5d133b2090dfd89f12 | 2106a7b895e54071d3490c2acd020b62b57748a4 | "2022-01-22T09:31:13Z" | c++ | "2022-04-25T11:23:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,838 | ["src/DataTypes/DataTypeTuple.cpp", "src/DataTypes/Serializations/SerializationNamed.h", "tests/queries/0_stateless/02184_nested_tuple.reference", "tests/queries/0_stateless/02184_nested_tuple.sql"] | Null pointer exception - server crash selecting Tuple | **Describe the unexpected behaviour**
I have a table with the following column definition:
```
endUserIDs Tuple(
_experience Tuple(
aaid Tuple(
id Nullable(String),
namespace Tuple(
code LowCardinality(Nullable(String))
),
primary LowCardinality(Nullable(UInt8))
),
mcid Tuple(
id Nullable(String),
namespace Tuple(
code LowCardinality(Nullable(String))
),
primary LowCardinality(Nullable(UInt8))
)
)
)
```
When I select a tuple element `endUserIDs._experience`, server crashes with null pointer exception:
**How to reproduce**
* 22.1.2.2
* clickhouse-client
* select endUserIDs._experience from table limit 10;
**Expected behavior**
Query should succeed (This was working on version 21.10)
**Error message and/or stacktrace**
```
2022.01.20 15:24:47.705230 [ 2793 ] {} <Fatal> BaseDaemon: ########################################
2022.01.20 15:24:47.705370 [ 2793 ] {} <Fatal> BaseDaemon: (version 22.1.2.2 (official build), build id: D4467B3558D29571) (from thread 2577) (query_id: 0e011ee7-27f7-4053-89ce-d2fe0e678170) Received signal Segmentation fault (11)
2022.01.20 15:24:47.705406 [ 2793 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
2022.01.20 15:24:47.705439 [ 2793 ] {} <Fatal> BaseDaemon: Stack trace: 0x13245f59 0x1323a5f9 0x143a2fd3 0x14b1dd2e 0x14b226e0 0x14b20ed5 0x14b16da8 0x14b17ead 0x14b16620 0x1483f655 0x1483f21a 0x14a8a802 0x1485ecc3 0x1485353e 0x14852389 0x14852098 0x148627e7 0xa86e077 0xa871a7d 0x7f74452ac609 0x7f74451d3293
2022.01.20 15:24:47.705512 [ 2793 ] {} <Fatal> BaseDaemon: 2. DB::IDataType::createColumn(DB::ISerialization const&) const @ 0x13245f59 in /usr/bin/clickhouse
2022.01.20 15:24:47.705547 [ 2793 ] {} <Fatal> BaseDaemon: 3. DB::DataTypeTuple::createColumn(DB::ISerialization const&) const @ 0x1323a5f9 in /usr/bin/clickhouse
2022.01.20 15:24:47.705579 [ 2793 ] {} <Fatal> BaseDaemon: 4. DB::MergeTreeReaderWide::readRows(unsigned long, unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x143a2fd3 in /usr/bin/clickhouse
2022.01.20 15:24:47.705614 [ 2793 ] {} <Fatal> BaseDaemon: 5. DB::MergeTreeRangeReader::DelayedStream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x14b1dd2e in /usr/bin/clickhouse
2022.01.20 15:24:47.705674 [ 2793 ] {} <Fatal> BaseDaemon: 6. DB::MergeTreeRangeReader::startReadingChain(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0x14b226e0 in /usr/bin/clickhouse
2022.01.20 15:24:47.705713 [ 2793 ] {} <Fatal> BaseDaemon: 7. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0x14b20ed5 in /usr/bin/clickhouse
2022.01.20 15:24:47.705743 [ 2793 ] {} <Fatal> BaseDaemon: 8. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0x14b16da8 in /usr/bin/clickhouse
2022.01.20 15:24:47.705765 [ 2793 ] {} <Fatal> BaseDaemon: 9. DB::MergeTreeBaseSelectProcessor::readFromPart() @ 0x14b17ead in /usr/bin/clickhouse
2022.01.20 15:24:47.705885 [ 2793 ] {} <Fatal> BaseDaemon: 10. DB::MergeTreeBaseSelectProcessor::generate() @ 0x14b16620 in /usr/bin/clickhouse
2022.01.20 15:24:47.705946 [ 2793 ] {} <Fatal> BaseDaemon: 11. DB::ISource::tryGenerate() @ 0x1483f655 in /usr/bin/clickhouse
2022.01.20 15:24:47.705981 [ 2793 ] {} <Fatal> BaseDaemon: 12. DB::ISource::work() @ 0x1483f21a in /usr/bin/clickhouse
2022.01.20 15:24:47.706010 [ 2793 ] {} <Fatal> BaseDaemon: 13. DB::SourceWithProgress::work() @ 0x14a8a802 in /usr/bin/clickhouse
2022.01.20 15:24:47.706033 [ 2793 ] {} <Fatal> BaseDaemon: 14. DB::ExecutionThreadContext::executeTask() @ 0x1485ecc3 in /usr/bin/clickhouse
2022.01.20 15:24:47.706063 [ 2793 ] {} <Fatal> BaseDaemon: 15. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x1485353e in /usr/bin/clickhouse
2022.01.20 15:24:47.706109 [ 2793 ] {} <Fatal> BaseDaemon: 16. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x14852389 in /usr/bin/clickhouse
2022.01.20 15:24:47.706142 [ 2793 ] {} <Fatal> BaseDaemon: 17. DB::PipelineExecutor::execute(unsigned long) @ 0x14852098 in /usr/bin/clickhouse
2022.01.20 15:24:47.706172 [ 2793 ] {} <Fatal> BaseDaemon: 18. ? @ 0x148627e7 in /usr/bin/clickhouse
2022.01.20 15:24:47.706220 [ 2793 ] {} <Fatal> BaseDaemon: 19. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa86e077 in /usr/bin/clickhouse
2022.01.20 15:24:47.706268 [ 2793 ] {} <Fatal> BaseDaemon: 20. ? @ 0xa871a7d in /usr/bin/clickhouse
2022.01.20 15:24:47.706302 [ 2793 ] {} <Fatal> BaseDaemon: 21. ? @ 0x7f74452ac609 in ?
2022.01.20 15:24:47.706344 [ 2793 ] {} <Fatal> BaseDaemon: 22. __clone @ 0x7f74451d3293 in ?
2022.01.20 15:24:47.841121 [ 2793 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 1DADB85040C1C3668E7B676D9BDFA079. There is no information about the reference checksum.
```
**Additional context**
Add any other context about the problem here.
| https://github.com/ClickHouse/ClickHouse/issues/33838 | https://github.com/ClickHouse/ClickHouse/pull/33956 | 2c85373966055351fdafe4b52440ea551844031d | cd2305eb57d02a0ef17a2ce0f39d905b83d8a080 | "2022-01-20T15:37:37Z" | c++ | "2022-01-27T19:58:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,834 | ["tests/queries/0_stateless/02181_dictionary_attach_detach.reference", "tests/queries/0_stateless/02181_dictionary_attach_detach.sql"] | dictionary table detached earlier, could not attach it now. | I tried to drop the dictionary table and detached it, and now I cannot attach it. I have shared a screenshot of the output:

| https://github.com/ClickHouse/ClickHouse/issues/33834 | https://github.com/ClickHouse/ClickHouse/pull/33870 | e9c46b51f1b510f8f41b9119df75791caa2ab9b4 | 1b5a67517bd520a0bbd2e39e51de71e32798e747 | "2022-01-20T14:09:22Z" | c++ | "2022-01-21T19:17:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,820 | ["src/Dictionaries/CacheDictionary.cpp", "src/Dictionaries/FlatDictionary.cpp", "src/Dictionaries/FlatDictionary.h", "src/Dictionaries/HashedArrayDictionary.cpp", "src/Dictionaries/HashedDictionary.cpp", "src/Dictionaries/HashedDictionary.h", "src/Dictionaries/RangeHashedDictionary.cpp", "src/Dictionaries/RangeHashedDictionary.h", "tests/queries/0_stateless/02183_dictionary_no_attributes.reference", "tests/queries/0_stateless/02183_dictionary_no_attributes.sql"] | Server crash when accessing a dictionary without attributes | When creating a dictionary without attributes in the database with the Ordinary engine and querying the dictionary, the clickhouse server crashes
```
SELECT version()
ββversion()βββ
β 21.12.3.32 β
ββββββββββββββ
CREATE DATABASE test ENGINE = Ordinary;
CREATE TABLE test.dict_src_test
(
`StartDate` Date,
`EndDate` Date,
`ColumnString` String,
`Columnone` String,
`Columntwo` String,
`Columnthree` String,
`Value` Int64,
`Columnsix` String
)
ENGINE = ReplacingMergeTree()
PARTITION BY toYear(StartDate)
PRIMARY KEY StartDate
ORDER BY (StartDate,
EndDate,
ColumnString,
Columnone,
Columntwo,
Columnthree,
Value,
Columnsix)
SETTINGS index_granularity = 8192;
CREATE DICTIONARY test.dict_test
(
`ColumnString` String,
`Columnone` String,
`Columntwo` String,
`Value` UInt64,
`Columnsix` String,
`StartDate` DATE,
`EndDate` DATE
)
PRIMARY KEY ColumnString,
Columnone,
Columntwo,
Value,
Columnsix
SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' PASSWORD '' DB 'test' TABLE 'dict_src_test'))
LIFETIME(MIN 82800 MAX 86400)
LAYOUT(COMPLEX_KEY_RANGE_HASHED())
RANGE(MIN StartDate MAX EndDate);
INSERT INTO test.dict_src_test (ColumnString) VALUES ('Three');
INSERT INTO test.dict_src_test (Columnone) VALUES ('Apple');
INSERT INTO test.dict_src_test (Columntwo) VALUES ('333333');
INSERT INTO test.dict_src_test (Columnthree) VALUES ('Pen');
INSERT INTO test.dict_src_test (Value) VALUES ('6666');
INSERT INTO test.dict_src_test (Columnsix) VALUES ('Time');
select * from test.dict_test;
2022.01.19 13:32:10.458512 [ 814 ] <Fatal> BaseDaemon: ########################################
2022.01.19 13:32:10.458562 [ 814 ] <Fatal> BaseDaemon: (version 21.12.3.32 (official build), build id: FA4A7F489F3FF6E3) (from thread 589) (query_id: 4022beaa-4bd1-4e0d-a921-63ebf9fa12ad) Received signal Segmentation fault (11)
2022.01.19 13:32:10.458582 [ 814 ] <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2022.01.19 13:32:10.458605 [ 814 ] <Fatal> BaseDaemon: Stack trace: 0x10f0ce58 0x10f03822 0x10ea6fb3 0x136aee8c 0x13686412 0x1329076e 0x132888d4 0x132883b6 0x132d7ea5 0x132d8fb6 0x135289f0 0x135267d5 0x13fd8c51 0x13fec4b9 0x16f3d6af 0x16f3fb01 0x1704e889 0x1704bf80 0x7f91cc9f6609 0x7f91cc912293
2022.01.19 13:32:10.458663 [ 814 ] <Fatal> BaseDaemon: 2. bool DB::RangeHashedDictionary<(DB::DictionaryKeyType)1>::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, unsigned long, unsigned long) const::'lambda'(auto const&)::operator()<DB::TypePair<DB::DataTypeDate, void> >(auto const&) @ 0x10f0ce58 in /usr/bin/clickhouse
2022.01.19 13:32:10.458702 [ 814 ] <Fatal> BaseDaemon: 3. bool DB::callOnIndexAndDataType<void, DB::RangeHashedDictionary<(DB::DictionaryKeyType)1>::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, unsigned long, unsigned long) const::'lambda'(auto const&)&>(DB::TypeIndex, DB::RangeHashedDictionary<(DB::DictionaryKeyType)1>::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, unsigned long, unsigned long) const::'lambda'(auto const&)&) @ 0x10f03822 in /usr/bin/clickhouse
2022.01.19 13:32:10.458732 [ 814 ] <Fatal> BaseDaemon: 4. DB::RangeHashedDictionary<(DB::DictionaryKeyType)1>::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, unsigned long, unsigned long) const @ 0x10ea6fb3 in /usr/bin/clickhouse
2022.01.19 13:32:10.458765 [ 814 ] <Fatal> BaseDaemon: 5. DB::StorageDictionary::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0x136aee8c in /usr/bin/clickhouse
2022.01.19 13:32:10.458797 [ 814 ] <Fatal> BaseDaemon: 6. DB::IStorage::read(DB::QueryPlan&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0x13686412 in /usr/bin/clickhouse
2022.01.19 13:32:10.458835 [ 814 ] <Fatal> BaseDaemon: 7. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&) @ 0x1329076e in /usr/bin/clickhouse
2022.01.19 13:32:10.458876 [ 814 ] <Fatal> BaseDaemon: 8. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::optional<DB::Pipe>) @ 0x132888d4 in /usr/bin/clickhouse
2022.01.19 13:32:10.458898 [ 814 ] <Fatal> BaseDaemon: 9. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x132883b6 in /usr/bin/clickhouse
2022.01.19 13:32:10.458924 [ 814 ] <Fatal> BaseDaemon: 10. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x132d7ea5 in /usr/bin/clickhouse
2022.01.19 13:32:10.458949 [ 814 ] <Fatal> BaseDaemon: 11. DB::InterpreterSelectWithUnionQuery::execute() @ 0x132d8fb6 in /usr/bin/clickhouse
2022.01.19 13:32:10.458974 [ 814 ] <Fatal> BaseDaemon: 12. ? @ 0x135289f0 in /usr/bin/clickhouse
2022.01.19 13:32:10.459001 [ 814 ] <Fatal> BaseDaemon: 13. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x135267d5 in /usr/bin/clickhouse
2022.01.19 13:32:10.459023 [ 814 ] <Fatal> BaseDaemon: 14. DB::TCPHandler::runImpl() @ 0x13fd8c51 in /usr/bin/clickhouse
2022.01.19 13:32:10.459048 [ 814 ] <Fatal> BaseDaemon: 15. DB::TCPHandler::run() @ 0x13fec4b9 in /usr/bin/clickhouse
2022.01.19 13:32:10.459079 [ 814 ] <Fatal> BaseDaemon: 16. Poco::Net::TCPServerConnection::start() @ 0x16f3d6af in /usr/bin/clickhouse
2022.01.19 13:32:10.459107 [ 814 ] <Fatal> BaseDaemon: 17. Poco::Net::TCPServerDispatcher::run() @ 0x16f3fb01 in /usr/bin/clickhouse
2022.01.19 13:32:10.459141 [ 814 ] <Fatal> BaseDaemon: 18. Poco::PooledThread::run() @ 0x1704e889 in /usr/bin/clickhouse
2022.01.19 13:32:10.459164 [ 814 ] <Fatal> BaseDaemon: 19. Poco::ThreadImpl::runnableEntry(void*) @ 0x1704bf80 in /usr/bin/clickhouse
2022.01.19 13:32:10.459187 [ 814 ] <Fatal> BaseDaemon: 20. ? @ 0x7f91cc9f6609 in ?
2022.01.19 13:32:10.459207 [ 814 ] <Fatal> BaseDaemon: 21. clone @ 0x7f91cc912293 in ?
2022.01.19 13:32:10.561139 [ 814 ] <Fatal> BaseDaemon: Calculated checksum of the binary: 5BEBF5792A40F7E345921EDA3698245B.
``` | https://github.com/ClickHouse/ClickHouse/issues/33820 | https://github.com/ClickHouse/ClickHouse/pull/33918 | edc8568b2f186b5a77114619a0898349cede2e92 | 2d84d5f74d4f932999ad2900f2a2c219b3995b81 | "2022-01-20T08:32:42Z" | c++ | "2022-01-22T23:56:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,806 | ["src/Interpreters/InterpreterSystemQuery.cpp", "src/Storages/StorageReplicatedMergeTree.cpp", "src/Storages/StorageReplicatedMergeTree.h"] | RESTORE REPLICA fails with "It's a bug: replica is not readonly" | Under some circumstances, an attempt to `RESTORE REPLICA` will fail with the log message `It's a bug: replica is not readonly` even though the table shows as `read_only` in the `system.replicas` table.
**Expected behavior**
The expected behavior is that RESTORE REPLICA will either execute as expected if the table is `read_only` in the system.replicas table, or else fail with the correct error/log message, such as `Replica path is present at {} -- nothing to restore.`
**Additional context**
There are two ways for this scenario to occur:
(1) The most likely path:
The ClickHouse server starts, but zookeeper is not available, so the StorageReplicatedMergeTree field `is_read_only` is set to `true` for this table. However, the SRMT field `has_metadata_in_zookeeper` keeps its default value of `true`.
Zookeeper subsequently becomes available, but the expected zookeeper path does not exist, so the RESTORE REPLICA command appears to be valid, and passes the checks in function `InterpreterSystemQuery::restoreReplica()`
Because `has_metadata_in_zookeeper` retains its initial `true` value, this final check in `StorageReplicatedMergeTree::restoreMetadataInZooKeeper()` fails:
```
if (!is_readonly || has_metadata_in_zookeeper)
throw Exception(ErrorCodes::LOGICAL_ERROR, "It's a bug: replica is not readonly");
```
(2) Another path, where the user is trying to clear zookeeper for some reason:
ClickHouse connects to zookeeper, and finds some data about the replica table in the zookeeper path. In that case the StorageReplicatedMergeTree member variable `has_metadata_in_zookeeper` will keep its default value of `true`, and the SRMT member variable `is_read_only` will keep its default value of `false`
The user, for whatever reason, manually deletes the replica path from zookeeper.
Perhaps during the previous operation, ClickHouse loses its zookeeper connection. The `ReplicatedMergeTreeRestartingThread ` discovers that zookeeper is down, and set the `is_read_only` flag to true. However, again, `has_metadata_in_zookeeper` remains true.
In both cases the zookeeper path is indeed empty and the table is correctly marked `read_only`, but the `has_metadata_in_zookeeper` field has the incorrect `true` value.
The simplest workaround is to restart ClickHouse and `has_metadata_in_zookeeper` will be correctly set to false when attempting to initially attach the table. However, this value could also be set correctly in the process of executing the RESTORE REPLICA query.
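For reference, replicas currently stuck in the read-only state can be listed with a query like the following sketch (`is_readonly` and `zookeeper_path` are columns of `system.replicas`):

```sql
-- List replicated tables currently marked read-only, together with the
-- ZooKeeper path that SYSTEM RESTORE REPLICA would check.
SELECT database, table, is_readonly, zookeeper_path
FROM system.replicas
WHERE is_readonly;
```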
The other potential issue in this area is that the `InterpreterSystemQuery::restoreReplica()` function checks the system/context zookeeper and could return incorrect results if the replicated table is using an alternative zookeeper. | https://github.com/ClickHouse/ClickHouse/issues/33806 | https://github.com/ClickHouse/ClickHouse/pull/33847 | caa9cc499293cdedfefe70b67b9d50b0dc0591e7 | da9a38655b3f99a5d5e78b4008d7f65905e77a10 | "2022-01-20T01:04:45Z" | c++ | "2022-01-24T10:28:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,798 | ["src/Interpreters/RewriteFunctionToSubcolumnVisitor.cpp", "tests/queries/0_stateless/02232_functions_to_subcolumns_alias.reference", "tests/queries/0_stateless/02232_functions_to_subcolumns_alias.sql"] | optimize_functions_to_subcolumns count(null_column) doesn't use alias for column name | **Describe what's wrong**
When `optimize_functions_to_subcolumns` is enabled, ClickHouse doesn't use the alias for the result column name.
**How to reproduce**
ClickHouse 21.13
```
CREATE TABLE test
(
`key` UInt32,
`value` Nullable(UInt32)
)
ENGINE = MergeTree
ORDER BY key
INSERT INTO test SELECT
number,
NULL
FROM numbers(10000000);
SELECT count(value) AS a
FROM test
SETTINGS optimize_functions_to_subcolumns = 0
ββaββ
β 0 β
βββββ
1 rows in set. Elapsed: 0.028 sec. Processed 32.50 million rows, 162.52 MB (1.15 billion rows/s., 5.74 GB/s.)
SELECT count(value) AS a
FROM test
SETTINGS optimize_functions_to_subcolumns = 1
ββsum(not(value.null))ββ
β                    0 β
ββββββββββββββββββββββββ
1 rows in set. Elapsed: 0.032 sec. Processed 32.50 million rows, 32.50 MB (1.00 billion rows/s., 1.00 GB/s.)
```
**Expected behavior**
ClickHouse should use alias name.
| https://github.com/ClickHouse/ClickHouse/issues/33798 | https://github.com/ClickHouse/ClickHouse/pull/35079 | 733789da1fea0e23e1f129194882d1f746301694 | 5f8900cee6ebbbf409eccc9c729d1819ddb659b0 | "2022-01-19T17:50:30Z" | c++ | "2022-03-11T10:55:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,768 | ["src/Interpreters/SystemLog.cpp", "tests/integration/test_system_logs_comment/__init__.py", "tests/integration/test_system_logs_comment/test.py"] | clickhouse server can not start up when set comment for query_log |
**Describe the unexpected behaviour**
The clickhouse server (22.1.1.2542) cannot start up when a comment is set for query_log.
**How to reproduce**
Add a comment for query_log in config.xml:
```
<query_log>
<database>system</database>
<table>query_log</table>
<engine>ENGINE = MergeTree PARTITION BY (event_date)
ORDER BY (event_time)
TTL event_date + INTERVAL 14 DAY DELETE
SETTINGS ttl_only_drop_parts=1
COMMENT 'xxx'
</engine>
</query_log>
```
restart clickhouse-server:
2022.01.18 23:07:04.861218 [ 3415373 ] {} <Error> Application: Caught exception while loading metadata: Code: 62. DB::Exception: Syntax error (Storage to create table for query_log): failed at position 259 ('COMMENT') (line 5, col 31): COMMENT 'xxx'
. Expected end of query. (SYNTAX_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa82bc3a in /usr/bin/clickhouse
1. DB::parseQueryAndMovePosition(DB::IParser&, char const*&, char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, unsigned long, unsigned long) @ 0x14d6169f in /usr/bin/clickhouse
2. DB::SystemLogs::SystemLogs(std::__1::shared_ptr<DB::Context const>, Poco::Util::AbstractConfiguration const&) @ 0x13b996bb in /usr/bin/clickhouse
3. DB::Context::initializeSystemLogs() @ 0x134ac6af in /usr/bin/clickhouse
4. DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xa8b4b9d in /usr/bin/clickhouse
5. Poco::Util::Application::run() @ 0x17471aa6 in /usr/bin/clickhouse
6. DB::Server::run() @ 0xa8a71d1 in /usr/bin/clickhouse
7. mainEntryClickHouseServer(int, char**) @ 0xa8a4787 in /usr/bin/clickhouse
8. main @ 0xa825fea in /usr/bin/clickhouse
9. __libc_start_main @ 0x7f3e9d62f0b3 in ?
10. _start @ 0xa6acfae in /usr/bin/clickhouse
(version 22.1.1.2542)
**Expected behavior**
The ClickHouse server should start up when a comment is set for query_log.
21.8 works fine.
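A possible workaround until this is fixed (an untested sketch — it assumes the running version supports `MODIFY COMMENT`) is to remove `COMMENT` from the `<engine>` definition and attach the comment afterwards:

```sql
-- Workaround sketch: keep the <engine> string parseable by the server and
-- set the table comment with a separate DDL statement instead.
ALTER TABLE system.query_log MODIFY COMMENT 'xxx';
```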
| https://github.com/ClickHouse/ClickHouse/issues/33768 | https://github.com/ClickHouse/ClickHouse/pull/34536 | 944111183330f3e5941ce3760bca6473ff0e14c5 | da235f9cda6eca4081658032ba62fc4c8aced11a | "2022-01-19T07:16:32Z" | c++ | "2022-03-23T14:06:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,746 | ["src/Dictionaries/ExternalQueryBuilder.cpp", "tests/integration/test_dictionaries_update_field/test.py"] | {condition} placeholder does not work for external dictionaries with custom query and `update_field` | Use of custom query/placeholder for external dictionaries.
The `placeholder` is similar to `update_field`
The current implementation supports only a simple table field, like
`SOURCE(CLICKHOUSE(... update_field 'added_time' update_lag 15))`
but it lacks support for complex/compound expressions (which could, for example, use an index more efficiently).
Example:
query = "
....
from t1
where :placeholder between col1 and col2 and col3=42
"
invalidate_query=
placeholder_query = "select max(col1) from t1 ...",
placeholder_init_str = "toDateTime('1970-01-01')"
So the initial value of the placeholder is calculated from `placeholder_init_str`,
subsequent values from `invalidate_query`,
and the value can be used in the query.
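Assembled into DDL, the proposal could look roughly like this (a hypothetical sketch — `PLACEHOLDER_QUERY` and `PLACEHOLDER_INIT_STR` are invented option names from this proposal, not existing ClickHouse syntax; only `QUERY` exists today):

```sql
-- Hypothetical sketch of the proposed dictionary source options.
CREATE DICTIONARY d
(
    col3 UInt64,
    value String
)
PRIMARY KEY col3
SOURCE(CLICKHOUSE(
    QUERY 'SELECT col3, value FROM t1
           WHERE {placeholder} BETWEEN col1 AND col2 AND col3 = 42'
    -- proposed: PLACEHOLDER_QUERY 'SELECT max(col1) FROM t1'
    -- proposed: PLACEHOLDER_INIT_STR 'toDateTime(''1970-01-01'')'
))
LAYOUT(HASHED())
LIFETIME(MIN 82800 MAX 86400);
```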
By introducing this feature work with dictionary sources will be more flexible and general. | https://github.com/ClickHouse/ClickHouse/issues/33746 | https://github.com/ClickHouse/ClickHouse/pull/37947 | 3b344a3d263d367b0420912f92584e9c8afe6a2a | aa5293da5df10f0703ef04912158bf7e087f658f | "2022-01-18T17:54:16Z" | c++ | "2022-06-09T14:31:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,734 | ["tests/queries/0_stateless/02154_dictionary_get_http_json.reference", "tests/queries/0_stateless/02154_dictionary_get_http_json.sh"] | Http query get dictionary in JSON format | **Describe what's wrong**
The ClickHouse server crashes when an HTTP request for a dictionary is sent with the JSON return format.
**Does it reproduce on recent release?**
version: 21.11.5.33
In version 21.10.5.3 the problem does not reproduce.
**How to reproduce**
```
CREATE DICTIONARY geo_city_en
(
`geoname_id` UInt64,
`locale_code` String DEFAULT '',
`continent_code` String DEFAULT '',
`continent_name` String DEFAULT '',
`country_iso_code` String DEFAULT '',
`country_name` String DEFAULT '',
`subdivision_1_iso_code` String DEFAULT '',
`subdivision_1_name` String DEFAULT '',
`subdivision_2_iso_code` String DEFAULT '',
`subdivision_2_name` String DEFAULT '',
`city_name` String DEFAULT '',
`metro_code` UInt32 DEFAULT 0,
`time_zone` String DEFAULT '',
`is_in_european_union` UInt8 DEFAULT 0
)
PRIMARY KEY geoname_id
SOURCE(HTTP(URL 'http://s3_proxy.stage_octo:8182/dicts/GeoIP2-City-Locations-en.csv' FORMAT 'CSVWithNames'))
LIFETIME(MIN 0 MAX 300)
LAYOUT(HASHED())
SETTINGS(format_csv_allow_single_quotes = 0)
```
Send http request to get dictionary:
```
http://127.0.0.1:8123/?wait_end_of_query=1&database=default&user=default&password=&wait_end_of_query=1&query=select dictGetString('geo_city_en','country_name', toUInt64(2997838)) as Country_DictName FORMAT JSON
```
**Expected behavior**
Return query result in JSON format.
**Error message and/or stacktrace**
```
2022.01.18 12:20:49.613177 [ 93 ] {58b0300a-75f1-4455-bfbd-65d10bcd17f9} <Debug> executeQuery: (from 10.0.72.3:34832) select dictGetString('dict.geo_city_en','country_name', toUInt64(2997838)) as Country_DictName FORMAT JSON
2022.01.18 12:20:49.613654 [ 93 ] {58b0300a-75f1-4455-bfbd-65d10bcd17f9} <Information> executeQuery: Read 1 rows, 1.00 B in 0.000460057 sec., 2173 rows/sec., 2.12 KiB/sec.
2022.01.18 12:20:49.614070 [ 253 ] {} <Fatal> BaseDaemon: ########################################
2022.01.18 12:20:49.614090 [ 253 ] {} <Fatal> BaseDaemon: (version 21.11.5.33 (official build), build id: 76A10A4F605EF849249F2E8673661F7254B779DA) (from thread 93) (query_id: 58b0300a-75f1-4455-bfbd-65d10bcd17f9) Received signal Segmentation fault (11)
2022.01.18 12:20:49.614108 [ 253 ] {} <Fatal> BaseDaemon: Address: 0x8 Access: write. Address not mapped to object.
2022.01.18 12:20:49.614122 [ 253 ] {} <Fatal> BaseDaemon: Stack trace: 0x130e5bd3 0x9bdfe58 0x130d34ad 0x130d7ac0 0x1313c948 0x15d6e96f 0x15d70d61 0x15e85709 0x15e82e40 0x7fa0ebd90609 0x7fa0ebc8a293
2022.01.18 12:20:49.614174 [ 253 ] {} <Fatal> BaseDaemon: 2. DB::CascadeWriteBuffer::nextImpl() @ 0x130e5bd3 in /usr/bin/clickhouse
2022.01.18 12:20:49.614184 [ 253 ] {} <Fatal> BaseDaemon: 3. DB::WriteBuffer::finalize() @ 0x9bdfe58 in /usr/bin/clickhouse
2022.01.18 12:20:49.614214 [ 253 ] {} <Fatal> BaseDaemon: 4. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x130d34ad in /usr/bin/clickhouse
2022.01.18 12:20:49.614227 [ 253 ] {} <Fatal> BaseDaemon: 5. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x130d7ac0 in /usr/bin/clickhouse
2022.01.18 12:20:49.614234 [ 253 ] {} <Fatal> BaseDaemon: 6. DB::HTTPServerConnection::run() @ 0x1313c948 in /usr/bin/clickhouse
2022.01.18 12:20:49.614241 [ 253 ] {} <Fatal> BaseDaemon: 7. Poco::Net::TCPServerConnection::start() @ 0x15d6e96f in /usr/bin/clickhouse
2022.01.18 12:20:49.614248 [ 253 ] {} <Fatal> BaseDaemon: 8. Poco::Net::TCPServerDispatcher::run() @ 0x15d70d61 in /usr/bin/clickhouse
2022.01.18 12:20:49.614254 [ 253 ] {} <Fatal> BaseDaemon: 9. Poco::PooledThread::run() @ 0x15e85709 in /usr/bin/clickhouse
2022.01.18 12:20:49.614261 [ 253 ] {} <Fatal> BaseDaemon: 10. Poco::ThreadImpl::runnableEntry(void*) @ 0x15e82e40 in /usr/bin/clickhouse
2022.01.18 12:20:49.614271 [ 253 ] {} <Fatal> BaseDaemon: 11. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2022.01.18 12:20:49.614282 [ 253 ] {} <Fatal> BaseDaemon: 12. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
```
**Additional context**
The problem does not reproduce when the return format is CSV.
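Until the crash is fixed, the same lookup can be issued with a non-JSON output format as a stop-gap (a sketch using the dictionary from the reproduction above):

```sql
-- Workaround sketch: avoid the JSON output path that triggers the crash.
SELECT dictGetString('geo_city_en', 'country_name', toUInt64(2997838)) AS Country_DictName
FORMAT CSVWithNames;
```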
| https://github.com/ClickHouse/ClickHouse/issues/33734 | https://github.com/ClickHouse/ClickHouse/pull/34454 | 4ec8da73c4446e4f94bd6c3428608fe793cd1a25 | e0dfc9cd3888d50e49913cf11cee2286dcdcbfe5 | "2022-01-18T10:19:56Z" | c++ | "2022-02-09T16:21:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,603 | ["src/Interpreters/RewriteSumIfFunctionVisitor.cpp", "tests/queries/0_stateless/02177_sum_if_not_found.reference", "tests/queries/0_stateless/02177_sum_if_not_found.sql"] | DB::Exception: Unknown function sumIF: While processing t, sumIF(n, 0) | ```sql
SELECT sumIF(1, 0)
Query id: 3a3db9bd-b0ad-4d2f-bd8b-39daeda0546c
ββcountIf(0)ββ
β          0 β
ββββββββββββββ
1 rows in set. Elapsed: 0.244 sec.
```
is OK.
```sql
CREATE TABLE agg
ENGINE = AggregatingMergeTree
ORDER BY tuple() AS
SELECT
t,
sumIF(n, 0)
FROM data
GROUP BY t
Query id: f47b729e-c574-4978-b565-aa805b7b896f
0 rows in set. Elapsed: 0.009 sec.
Received exception from server (version 21.13.1):
Code: 46. DB::Exception: Received from localhost:9000. DB::Exception: Unknown function sumIF: While processing t, sumIF(n, 0). (UNKNOWN_FUNCTION)
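-- Note (workaround sketch): the If combinator suffix is case-sensitive here;
-- spelling the function exactly as `sumIf` avoids UNKNOWN_FUNCTION:
--   CREATE TABLE agg ENGINE = AggregatingMergeTree ORDER BY tuple() AS
--   SELECT t, sumIf(n, 0) FROM data GROUP BY t;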
``` | https://github.com/ClickHouse/ClickHouse/issues/33603 | https://github.com/ClickHouse/ClickHouse/pull/33677 | 539b4f9b23de17837f05ab248f098c5b685c8e10 | d7a63dfda6b4d364a2d60429fb669bcac1c8dd1f | "2022-01-13T14:34:27Z" | c++ | "2022-01-22T14:12:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,592 | ["tests/queries/0_stateless/02176_toStartOfWeek_overflow_pruning.reference", "tests/queries/0_stateless/02176_toStartOfWeek_overflow_pruning.sql"] | Wrong result for query with minimal timestamp 1970-01-01 00:00:00 | While evaluating whether `v21.12.3.32` is compatible with our currently used release `v20.12.8.2`, we observed that some queries no longer return the expected rows.
**Version**
```
SELECT version()
ββversion()βββ
β 21.12.3.32 β
ββββββββββββββ
```
**How to reproduce**
```
CREATE TABLE Test.A
(
`timestamp` DateTime('UTC'),
`a` Int
)
ENGINE = MergeTree
PARTITION BY toStartOfWeek(toDateTime(timestamp, 'UTC'))
ORDER BY timestamp
```
```
CREATE TABLE Test.B
(
`timestamp` DateTime('UTC'),
`a` Int
)
ENGINE = MergeTree
PARTITION BY toStartOfWeek(toDateTime(timestamp, 'UTC'))
ORDER BY timestamp
```
```
INSERT INTO Test.A VALUES (toDateTime(1559952000,'UTC'), 1)
```
**Expect** one returned row, but none is returned:
```
SELECT toUInt32(timestamp) AS ts
FROM
(
SELECT DISTINCT timestamp
FROM Test.A
)
ALL LEFT JOIN
(
SELECT DISTINCT
timestamp,
1 AS exists
FROM Test.B
) USING (timestamp)
WHERE (exists = 0) AND (timestamp >= '1970-01-01 00:00:00')
Query id: bbea4bca-fbf6-4b00-816a-54f85e3a63bc
Ok.
0 rows in set. Elapsed: 0.018 sec.
```
**Unexpected workaround**: Use '1970-01-**04** 00:00:00' instead of '1970-01-01 00:00:00'
```
SELECT toUInt32(timestamp) AS ts
FROM
(
SELECT DISTINCT timestamp
FROM Test.A
)
ALL LEFT JOIN
(
SELECT DISTINCT
timestamp,
1 AS exists
FROM Test.B
) USING (timestamp)
WHERE (exists = 0) AND (timestamp >= '1970-01-04 00:00:00')
Query id: f0338a9c-64db-4b3d-9c70-93314393233d
ββββββββββtsββ
β 1559952000 β
ββββββββββββββ
1 rows in set. Elapsed: 0.034 sec.
```
**Expected behavior**
It should work the same way as in ClickHouse release v20, which accepted the minimal timestamp 0 ('1970-01-01 00:00:00'),
since we determine the minimum timestamp with the following query:
```
SELECT min(timestamp)
FROM Test.B
βββββββmin(timestamp)ββ
β 1970-01-01 00:00:00 β
βββββββββββββββββββββββ
```
| https://github.com/ClickHouse/ClickHouse/issues/33592 | https://github.com/ClickHouse/ClickHouse/pull/33658 | 8aed06c9227e3815d8793415ff2c815e14fd2f14 | 998b330507261606eb8bf4c95006d3463f3dd4bc | "2022-01-13T11:18:47Z" | c++ | "2022-01-15T13:31:21Z" |
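A plausible mechanism — an inference from the test names in the fix (`toStartOfWeek_overflow_pruning`), not a verified diagnosis: the default `toStartOfWeek` is Sunday-based, and 1970-01-01 is a Thursday, so its week starts on 1969-12-28 — before the epoch — and the value underflows and must be clamped. Partition pruning can then mishandle bounds below 1970-01-04, the first Sunday of 1970, which would explain why that constant works around the bug. A Python sketch of the clamped computation:

```python
from datetime import date, timedelta

EPOCH = date(1970, 1, 1)

def to_start_of_week(d: date) -> date:
    """Sunday-based start-of-week clamped to the epoch -- a sketch of
    ClickHouse's default toStartOfWeek behaviour, not its implementation."""
    days_back = (d.weekday() + 1) % 7   # Python: Monday=0 ... Sunday=6
    start = d - timedelta(days=days_back)
    return max(start, EPOCH)            # dates before 1970-01-01 underflow

# 1970-01-01 is a Thursday; its "real" week start would be 1969-12-28.
print(to_start_of_week(date(1970, 1, 1)))   # 1970-01-01 (clamped)
print(to_start_of_week(date(1970, 1, 4)))   # 1970-01-04 (a Sunday, no clamp)
# 1559952000 == 2019-06-08 00:00:00 UTC, the inserted row's day:
print(to_start_of_week(date(2019, 6, 8)))   # 2019-06-02
```

Every timestamp in the partial first week maps to the same clamped partition value, so comparisons against a `'1970-01-01'` bound no longer behave monotonically with respect to the partition key.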
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,536 | ["src/Processors/Formats/Impl/NativeFormat.cpp", "tests/queries/0_stateless/02187_async_inserts_all_formats.python", "tests/queries/0_stateless/02187_async_inserts_all_formats.reference", "tests/queries/0_stateless/02187_async_inserts_all_formats.sh", "tests/queries/0_stateless/helpers/pure_http_client.py"] | INSERT INTO FORMAT Native with async_insert=1 appears to succeed, but does nothing | I am trying to move to `async_insert` to avoid having a batching component merge many small inserts (the exact use case of `async_insert`). It works as intended over the HTTP protocol using both row-based input formats (CSV, TSV) and the column-based Parquet format (for testing, I generate Parquet data by piping CSV through clickhouse-local). However, if I try the more efficient and easier to generate Native input format, the HTTP query succeeds and returns a query id via the `X-ClickHouse-Query-Id` header, a corresponding entry in the `system.asynchronous_inserts` table is created and then deleted, and CH logs appear to indicate that the insert succeeded (see below), but no data is inserted into the table. If `async_insert` is set to 0, everything works correctly.
**How to reproduce**
CH version 21.12.3.32 (official build).
```
clickhouse-client -q "CREATE TABLE async_inserts (id UInt32, s String) ENGINE = Memory"
function insert() {
    curl -i --data-binary @- "http://localhost:8123?query=INSERT+INTO+async_inserts+FORMAT+$1&async_insert=$2"
}
function convert() {
    clickhouse-local --input-format=CSV -N t -S 'id UInt32,s String' -q "SELECT * FROM t FORMAT $1"
}
# with async_insert=1
echo '1, "a"' | insert CSV 1
echo '2, "b"' | convert Parquet | insert Parquet 1
echo '3, "c"' | convert Native | insert Native 1
# with async_insert=0
echo '4, "a"' | insert CSV 0
echo '5, "b"' | convert Parquet | insert Parquet 0
echo '6, "c"' | convert Native | insert Native 0
clickhouse-client -q "SELECT * FROM async_inserts FORMAT TSV"
# output:
# 1 a
# 2 b
# 4 a
# 5 b
# 6 c
```
**Example records from /var/log/clickhouse-server/clickhouse-server.log**
2022.01.11 21:35:54.615405 [ 9514 ] {d8f833c7-1b68-49ad-a8b4-abbb4e79c3e8} <Debug> executeQuery: (from [::ffff:127.0.0.1]:53242) INSERT INTO async_inserts FORMAT Native
2022.01.11 21:35:54.615449 [ 9514 ] {d8f833c7-1b68-49ad-a8b4-abbb4e79c3e8} <Trace> ContextAccess (default): Access granted: INSERT(id, s) ON default.async_inserts
2022.01.11 21:35:54.615500 [ 9514 ] {d8f833c7-1b68-49ad-a8b4-abbb4e79c3e8} <Trace> AsynchronousInsertQueue: Have 1 pending inserts with total 27 bytes of data for query 'INSERT INTO default.async_inserts FORMAT Native'
2022.01.11 21:35:54.817042 [ 9514 ] {d8f833c7-1b68-49ad-a8b4-abbb4e79c3e8} <Debug> DynamicQueryHandler: Done processing query
    2022.01.11 21:35:54.817072 [ 9514 ] {d8f833c7-1b68-49ad-a8b4-abbb4e79c3e8} <Debug> MemoryTracker: Peak memory usage (for query): 3.00 MiB.
 | https://github.com/ClickHouse/ClickHouse/issues/33536 | https://github.com/ClickHouse/ClickHouse/pull/34068 | 379f8d3d7e531840503456c84cf000c14df578b0 | b950a12cb3d688453e9c916e2bb6f6e43ddd0013 | "2022-01-11T21:38:46Z" | c++ | "2022-01-28T22:24:53Z" |
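A hedged guess at the mechanism (the fix touches `NativeFormat.cpp`, but the model below is hypothetical, with made-up names): asynchronous inserts buffer the raw payload per query and parse it only on flush; if the parsing path yields zero rows for a block-oriented format like Native, the flush "succeeds" with nothing to write — consistent with the logs above, which report 27 pending bytes and then go quiet. A Python sketch:

```python
# Hypothetical model of an async-insert queue: payloads are buffered and
# parsed on flush. A format whose parser yields no rows through this path
# produces a silent no-op insert.

def parse_csv(payload: bytes):
    """Toy row-oriented parser: one tuple per non-empty CSV line."""
    for line in payload.decode().splitlines():
        if line.strip():
            yield tuple(f.strip().strip('"') for f in line.split(","))

def parse_unsupported(payload: bytes):
    """Stand-in for a format the async path cannot iterate row-by-row."""
    return iter(())

class AsyncInsertQueue:
    def __init__(self):
        self.buffers = {}   # (table, format) -> list of raw payloads
        self.tables = {}    # table -> list of inserted rows

    def insert(self, table, fmt, payload):
        self.buffers.setdefault((table, fmt), []).append(payload)

    def flush(self, parsers):
        for (table, fmt), payloads in self.buffers.items():
            rows = [r for p in payloads for r in parsers[fmt](p)]
            self.tables.setdefault(table, []).extend(rows)
        self.buffers.clear()   # flush "succeeds" even with zero rows

q = AsyncInsertQueue()
q.insert("t", "CSV", b'1, "a"\n')
q.insert("t", "Native", b"\x01\x02\x03")  # arbitrary bytes, never become rows
q.flush({"CSV": parse_csv, "Native": parse_unsupported})
print(q.tables["t"])  # only the CSV row arrives; the Native insert vanished
```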
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,390 | ["docs/en/sql-reference/functions/array-functions.md", "src/Functions/array/arrayFirst.cpp", "tests/queries/0_stateless/02161_array_first_last.reference", "tests/queries/0_stateless/02161_array_first_last.sql"] | Add arrayLast function (same as arrayFirst, but get the last element instead of the first) | Currently we need to write something like
`select arrayFilter(...)[-1]`
to get the last element.
At the same time, we have the `arrayFirst` function for the first element of an array.
I think the ClickHouse community needs an analogue of `arrayFirst`, but for the last element.
My proposal is an `arrayLast` function:
```
select range(10) as arr,
arrayFirst((a)->(a > 3 and a < 8), arr) as first_element,
arrayLast((a)->(a > 3 and a < 8), arr) as last_element
```
with result

| https://github.com/ClickHouse/ClickHouse/issues/33390 | https://github.com/ClickHouse/ClickHouse/pull/33415 | 6a89ca58e37f07d8cb70d7906c12a3e0681ca770 | 140d64f7854fdf688bd8720dea8947ee1df71d2a | "2022-01-04T12:38:57Z" | c++ | "2022-01-07T17:37:06Z" |
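The proposed semantics — `arrayLast` as the mirror of `arrayFirst`, scanning from the end — can be modeled in Python (names and the `0` default are illustrative, not ClickHouse API):

```python
def array_first(pred, arr, default=0):
    """arrayFirst(pred, arr): first matching element, else a default."""
    return next((x for x in arr if pred(x)), default)

def array_last(pred, arr, default=0):
    """Proposed arrayLast: arrayFirst applied to the reversed array."""
    return next((x for x in reversed(arr) if pred(x)), default)

arr = list(range(10))           # range(10), as in the issue's example
pred = lambda a: 3 < a < 8      # matches 4..7
print(array_first(pred, arr))   # 4
print(array_last(pred, arr))    # 7
```

Scanning from the end stops at the first hit, which also avoids materializing the full `arrayFilter(...)` result just to take its last element.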