status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,629 | ["src/Processors/QueryPlan/AggregatingStep.cpp", "tests/queries/0_stateless/02343_group_by_use_nulls.reference", "tests/queries/0_stateless/02343_group_by_use_nulls.sql"] | Bad cast from type DB::ColumnNullable to DB::ColumnVector<unsigned long> | https://s3.amazonaws.com/clickhouse-test-reports/0/66e677b307558eec0fa18444dca828983c5d3093/fuzzer_astfuzzerubsan//report.html\r\n\r\n```\r\n2022.08.25 13:46:47.095221 [ 212 ] {477e3605-187f-4cad-9574-998bde09e74f} <Debug> executeQuery: (from [::ffff:127.0.0.1]:37656) SELECT 0, sum(number) AS val FROM numbers(1048576) GROUP BY GROUPING SETS (('1025'), (number)) ORDER BY '0.2147483646' ASC, (number, number % 2, val) DESC SETTINGS group_by_use_nulls = 1 (stage: Complete)\r\n2022.08.25 13:46:47.095336 [ 212 ] {477e3605-187f-4cad-9574-998bde09e74f} <Trace> ContextAccess (default): Access granted: CREATE TEMPORARY TABLE ON *.*\r\n2022.08.25 13:46:47.095758 [ 212 ] {477e3605-187f-4cad-9574-998bde09e74f} <Trace> InterpreterSelectQuery: FetchColumns -> Complete\r\n2022.08.25 13:46:47.098240 [ 449 ] {477e3605-187f-4cad-9574-998bde09e74f} <Trace> AggregatingTransform: Aggregating\r\n2022.08.25 13:46:47.098243 [ 448 ] {477e3605-187f-4cad-9574-998bde09e74f} <Trace> AggregatingTransform: Aggregating\r\n2022.08.25 13:46:47.098269 [ 449 ] {477e3605-187f-4cad-9574-998bde09e74f} <Trace> Aggregator: Aggregation method: without_key\r\n2022.08.25 13:46:47.098288 [ 448 ] {477e3605-187f-4cad-9574-998bde09e74f} <Trace> Aggregator: Aggregation method: key64\r\n2022.08.25 13:46:47.098957 [ 449 ] {477e3605-187f-4cad-9574-998bde09e74f} <Debug> AggregatingTransform: Aggregated. 1048576 to 1 rows (from 8.00 MiB) in 0.003004867 sec. (348959205.183 rows/sec., 2.60 GiB/sec.)\r\n2022.08.25 13:46:47.098979 [ 449 ] {477e3605-187f-4cad-9574-998bde09e74f} <Trace> Aggregator: Merging aggregated data\r\n2022.08.25 13:46:47.098995 [ 449 ] {477e3605-187f-4cad-9574-998bde09e74f} <Trace> Aggregator: Statistics updated for key=7339891029760931553: new sum_of_sizes=1, median_size=1\r\n2022.08.25 13:46:47.238693 [ 448 ] {477e3605-187f-4cad-9574-998bde09e74f} <Debug> AggregatingTransform: Aggregated. 1048576 to 1048576 rows (from 8.00 MiB) in 0.142719459 sec.
(7347113.052 rows/sec., 56.05 MiB/sec.)\r\n2022.08.25 13:46:47.238730 [ 448 ] {477e3605-187f-4cad-9574-998bde09e74f} <Trace> Aggregator: Merging aggregated data\r\n2022.08.25 13:46:47.238749 [ 448 ] {477e3605-187f-4cad-9574-998bde09e74f} <Trace> Aggregator: Statistics updated for key=7339891029760931553: new sum_of_sizes=1048576, median_size=1048576\r\n2022.08.25 13:46:47.607124 [ 448 ] {477e3605-187f-4cad-9574-998bde09e74f} <Debug> DiskLocal: Reserving 36.00 MiB on disk `_tmp_default`, having unreserved 83.77 GiB.\r\n2022.08.25 13:46:47.607259 [ 448 ] {477e3605-187f-4cad-9574-998bde09e74f} <Information> MergeSortingTransform: Sorting and writing part of data into temporary file ./tmp/tmp126mkaaaa\r\n2022.08.25 13:46:47.607367 [ 448 ] {477e3605-187f-4cad-9574-998bde09e74f} <Information> MergeSortingTransform: There are 1 temporary sorted parts to merge.\r\n2022.08.25 13:46:47.643680 [ 449 ] {477e3605-187f-4cad-9574-998bde09e74f} <Fatal> : Logical error: 'Bad cast from type DB::ColumnNullable to DB::ColumnVector<unsigned long>'.\r\n2022.08.25 13:48:59.940799 [ 212 ] {477e3605-187f-4cad-9574-998bde09e74f} <Information> TCPHandler: Query was cancelled.\r\n2022.08.25 13:48:59.941136 [ 450 ] {} <Fatal> BaseDaemon: (version 22.9.1.1 (official build), build id: C0F734BC13E32605) (from thread 449) (query_id: 477e3605-187f-4cad-9574-998bde09e74f) (query: SELECT 0, sum(number) AS val FROM numbers(1048576) GROUP BY GROUPING SETS (('1025'), (number)) ORDER BY '0.2147483646' ASC, (number, number % 2, val) DESC SETTINGS group_by_use_nulls = 1) Received signal Aborted (6)\r\n\r\n\r\n2022.08.25 13:46:47.643680 [ 449 ] {477e3605-187f-4cad-9574-998bde09e74f} <Fatal> : Logical error: 'Bad cast from type DB::ColumnNullable to DB::ColumnVector<unsigned long>'.\r\n2022.08.25 13:48:59.940978 [ 450 ] {} <Fatal> BaseDaemon: ########################################\r\n2022.08.25 13:48:59.941136 [ 450 ] {} <Fatal> BaseDaemon: (version 22.9.1.1 (official build), build id: C0F734BC13E32605) (from thread 449) (query_id: 477e3605-187f-4cad-9574-998bde09e74f) (query: SELECT 0, sum(number) AS val FROM numbers(1048576) GROUP BY GROUPING SETS (('1025'), (number)) ORDER BY '0.2147483646' ASC, (number, number % 2, val) DESC SETTINGS group_by_use_nulls = 1) Received signal Aborted (6)\r\n2022.08.25 13:48:59.941289 [ 450 ] {} <Fatal> BaseDaemon: \r\n2022.08.25 13:48:59.941472 [ 450 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f133f91100b 0x7f133f8f0859 0x1027d5e3 0x1027d88f 0x18c7e484 0x24b072d8 0x24ae8454 0x24b460e8 0x26fc4986 0x26fc41f3 0x273fef0f 0x26fd6ae5 0x270054a5 0x27004fcb 0x26ff505b 0x26ff6660 0x26ff6546 0x10348272 0x1034aec3 0x7f133fac8609 0x7f133f9ed133\r\n2022.08.25 13:48:59.941691 [ 450 ] {} <Fatal> BaseDaemon: 3. raise in ?\r\n2022.08.25 13:48:59.941932 [ 450 ] {} <Fatal> BaseDaemon: 4. abort in ?\r\n2022.08.25 13:48:59.969995 [ 450 ] {} <Fatal> BaseDaemon: 5. ./build_docker/../src/Common/Exception.cpp:47: DB::abortOnFailedAssertion(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in /workspace/clickhouse\r\n2022.08.25 13:48:59.993479 [ 450 ] {} <Fatal> BaseDaemon: 6. ./build_docker/../src/Common/Exception.cpp:70: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) in /workspace/clickhouse\r\n2022.08.25 13:49:00.968140 [ 450 ] {} <Fatal> BaseDaemon: 7. 
DB::ColumnVector<unsigned long> const& typeid_cast<DB::ColumnVector<unsigned long> const&, DB::IColumn const>(DB::IColumn const&) in /workspace/clickhouse\r\n2022.08.25 13:49:00.981626 [ 450 ] {} <Fatal> BaseDaemon: 8. ./build_docker/../src/DataTypes/Serializations/SerializationNumber.cpp:137: DB::SerializationNumber<unsigned long>::serializeBinaryBulk(DB::IColumn const&, DB::WriteBuffer&, unsigned long, unsigned long) const in /workspace/clickhouse\r\n2022.08.25 13:49:00.987036 [ 450 ] {} <Fatal> BaseDaemon: 9.1. inlined from ./build_docker/../contrib/libcxx/include/vector:1594: std::__1::vector<DB::ISerialization::Substream, std::__1::allocator<DB::ISerialization::Substream> >::pop_back()\r\n2022.08.25 13:49:00.987062 [ 450 ] {} <Fatal> BaseDaemon: 9. ../src/DataTypes/Serializations/SerializationNamed.cpp:55: DB::SerializationNamed::serializeBinaryBulkWithMultipleStreams(DB::IColumn const&, unsigned long, unsigned long, DB::ISerialization::SerializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::SerializeBinaryBulkState>&) const in /workspace/clickhouse\r\n2022.08.25 13:49:00.997745 [ 450 ] {} <Fatal> BaseDaemon: 10. ./build_docker/../src/DataTypes/Serializations/SerializationTuple.cpp:0: DB::SerializationTuple::serializeBinaryBulkWithMultipleStreams(DB::IColumn const&, unsigned long, unsigned long, DB::ISerialization::SerializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::SerializeBinaryBulkState>&) const in /workspace/clickhouse\r\n2022.08.25 13:49:01.008365 [ 450 ] {} <Fatal> BaseDaemon: 11. ./build_docker/../src/Formats/NativeWriter.cpp:64: DB::writeData(DB::ISerialization const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&, DB::WriteBuffer&, unsigned long, unsigned long) in /workspace/clickhouse\r\n2022.08.25 13:49:01.017867 [ 450 ] {} <Fatal> BaseDaemon: 12. ./build_docker/../src/Formats/NativeWriter.cpp:166: DB::NativeWriter::write(DB::Block const&) in /workspace/clickhouse\r\n2022.08.25 13:49:01.050545 [ 450 ] {} <Fatal> BaseDaemon: 13. ./build_docker/../src/Processors/Transforms/MergeSortingTransform.cpp:0: DB::BufferingToFileTransform::consume(DB::Chunk) in /workspace/clickhouse\r\n2022.08.25 13:49:01.062069 [ 450 ] {} <Fatal> BaseDaemon: 14.1. inlined from ./build_docker/../src/Processors/Chunk.h:32: ~Chunk\r\n2022.08.25 13:49:01.062096 [ 450 ] {} <Fatal> BaseDaemon: 14. ../src/Processors/IAccumulatingTransform.cpp:97: DB::IAccumulatingTransform::work() in /workspace/clickhouse\r\n2022.08.25 13:49:01.069074 [ 450 ] {} <Fatal> BaseDaemon: 15. ./build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:50: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) in /workspace/clickhouse\r\n2022.08.25 13:49:01.075486 [ 450 ] {} <Fatal> BaseDaemon: 16. ./build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:93: DB::ExecutionThreadContext::executeTask() in /workspace/clickhouse\r\n2022.08.25 13:49:01.090797 [ 450 ] {} <Fatal> BaseDaemon: 17. ./build_docker/../src/Processors/Executors/PipelineExecutor.cpp:228: DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) in /workspace/clickhouse\r\n2022.08.25 13:49:01.107881 [ 450 ] {} <Fatal> BaseDaemon: 18. ./build_docker/../contrib/libcxx/include/type_traits:3648: decltype(static_cast<DB::PipelineExecutor::spawnThreads()::$_0&>(fp)()) std::__1::__invoke_constexpr<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) in /workspace/clickhouse\r\n2022.08.25 13:49:01.124948 [ 450 ] {} <Fatal> BaseDaemon: 19.1. 
inlined from ./build_docker/../contrib/libcxx/include/tuple:0: operator()\r\n2022.08.25 13:49:01.124973 [ 450 ] {} <Fatal> BaseDaemon: 19. ../contrib/libcxx/include/type_traits:3640: decltype(static_cast<DB::PipelineExecutor::spawnThreads()::$_0>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(DB::PipelineExecutor::spawnThreads()::$_0&&) in /workspace/clickhouse\r\n2022.08.25 13:49:01.134093 [ 450 ] {} <Fatal> BaseDaemon: 20.1. inlined from ./build_docker/../contrib/libcxx/include/__functional/function.h:1157: std::__1::function<void ()>::operator=(std::nullptr_t)\r\n2022.08.25 13:49:01.134114 [ 450 ] {} <Fatal> BaseDaemon: 20. ../src/Common/ThreadPool.cpp:284: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) in /workspace/clickhouse\r\n2022.08.25 13:49:01.144858 [ 450 ] {} <Fatal> BaseDaemon: 21. ./build_docker/../src/Common/ThreadPool.cpp:0: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) in /workspace/clickhouse\r\n2022.08.25 13:49:01.144881 [ 450 ] {} <Fatal> BaseDaemon: 22. ? in ?\r\n2022.08.25 13:49:01.144930 [ 450 ] {} <Fatal> BaseDaemon: 23. __clone in ?\r\n2022.08.25 13:49:01.457050 [ 450 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: DBD24847576AE046A0B2EF4B7FE55D76)\r\n\r\n```\r\n | https://github.com/ClickHouse/ClickHouse/issues/40629 | https://github.com/ClickHouse/ClickHouse/pull/40997 | 2587ba96c3262afd330a571d210353aacc064810 | 981e9dbce2fbcb68ab4700c20363846f7e04bbf5 | "2022-08-25T15:11:03Z" | c++ | "2022-09-06T22:08:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,621 | ["contrib/NuRaft", "docs/en/operations/clickhouse-keeper.md", "src/Coordination/CoordinationSettings.cpp", "src/Coordination/FourLetterCommand.cpp", "src/Coordination/FourLetterCommand.h", "src/Coordination/Keeper4LWInfo.h", "src/Coordination/KeeperDispatcher.h", "src/Coordination/KeeperServer.cpp", "src/Coordination/KeeperServer.h", "tests/integration/test_keeper_four_word_command/test.py"] | Support manually triggered snapshots for Keeper. | **Use case**
Support manually triggering snapshots for Keeper.
**Describe the solution you'd like**
Through 4lw commands, such as `csnp`.
| https://github.com/ClickHouse/ClickHouse/issues/40621 | https://github.com/ClickHouse/ClickHouse/pull/41766 | e08f94d0f812e296fad969453d7076fe37312c24 | 9ac829d4c4beee74f3e11da2b9c16e0ac0399066 | "2022-08-25T12:42:03Z" | c++ | "2022-11-07T08:18:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,611 | ["src/Common/OvercommitTracker.cpp", "src/Common/OvercommitTracker.h", "src/Interpreters/ProcessList.cpp", "src/Interpreters/ProcessList.h"] | Strange hang in OvercommitTracker | Preprocessed traces: https://pastila.nl/?0006a09d/f3f1725be8a2ec8787b2d5c8b1eb6532
Report: https://s3.amazonaws.com/clickhouse-test-reports/40506/f2be18b3a644debbb882b8a9e8396b0375461b36/stress_test__undefined_.html | https://github.com/ClickHouse/ClickHouse/issues/40611 | https://github.com/ClickHouse/ClickHouse/pull/40677 | 2588901bc9178284f304cd6e3e72965e3da64554 | 0ec7f068ccbfe71e7e84d76bfbe2d835ef0540d8 | "2022-08-25T11:04:21Z" | c++ | "2022-08-30T10:54:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,599 | ["src/Storages/MergeTree/KeyCondition.cpp", "tests/queries/0_stateless/02420_key_condition_actions_dag_bug_40599.reference", "tests/queries/0_stateless/02420_key_condition_actions_dag_bug_40599.sql"] | 22.8.2.11 sql filter exception | SQL (1):
```sql
select count()
from (
SELECT event_dt
FROM (select event_dt, 403 AS event_id
from (
select event_dt
from tba as tba
where event_id = 9
and ((tba.event_dt >= 20220822 and tba.event_dt <= 20220822))
)) tba
WHERE tba.event_dt >= 20220822
and tba.event_dt <= 20220822
);
```
SQL (2):
```sql
select count()
from (
SELECT event_dt
FROM (select event_dt, 403 AS event_id
from (
select event_dt
from tba as tba
where event_id = 9
and ((tba.event_dt >= 20220822 and tba.event_dt <= 20220822))
)) tba
WHERE tba.event_dt >= 20220822
and tba.event_dt <= 20220822
and event_id = 403
);
```
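For reference, a minimal self-contained setup on which the two queries above can be compared (the issue does not include the real DDL, so the schema, engine, and ordering key below are assumptions):

```sql
-- Hypothetical schema (the real table definition was not given):
CREATE TABLE tba (event_dt UInt32, event_id UInt32)
ENGINE = MergeTree ORDER BY (event_id, event_dt);

-- Rows that both queries above should count:
INSERT INTO tba SELECT 20220822, 9 FROM numbers(10);
```

With this data both counts should be 10, since `event_id = 403` in SQL (2) filters on a constant alias that is true for every row of the subquery.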
Why does SQL (2) return less data? | https://github.com/ClickHouse/ClickHouse/issues/40599 | https://github.com/ClickHouse/ClickHouse/pull/41281 | 9597a470169e68cd0a380bed6db9780da5b329b3 | 16f78eb8043444f8b50ede478b833fad4379292d | "2022-08-25T06:46:22Z" | c++ | "2022-09-16T14:06:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,595 | ["src/Access/Common/AllowedClientHosts.cpp", "src/Common/CaresPTRResolver.cpp", "src/Common/CaresPTRResolver.h", "src/Common/DNSPTRResolver.h", "src/Common/DNSResolver.cpp", "src/Common/DNSResolver.h", "tests/integration/test_host_regexp_hosts_file_resolution/__init__.py", "tests/integration/test_host_regexp_hosts_file_resolution/configs/host_regexp.xml", "tests/integration/test_host_regexp_hosts_file_resolution/configs/listen_host.xml", "tests/integration/test_host_regexp_hosts_file_resolution/test.py"] | 22.8.2 version users.xml <host_regexp> abnormal | > Make sure to check documentation https://clickhouse.com/docs/en/ first. If the question is concise and probably has a short answer, asking it in Telegram chat https://telegram.me/clickhouse_en is probably the fastest way to find the answer. For more complicated questions, consider asking them on StackOverflow with "clickhouse" tag https://stackoverflow.com/questions/tagged/clickhouse
> If you still prefer GitHub issues, remove all this text and ask your question here.
```xml
<networks incl="networks" replace="replace">
<ip>127.0.0.1</ip>
<host_regexp>^abc00[1-6]$</host_regexp>
</networks>
```
```
clickhouse-client --host abc003
ClickHouse client version 22.8.2.11 (official build).
Connecting to abc003:9000 as user default.
If you have installed ClickHouse and forgot password you can reset it in the configuration file.
The password for default user is typically located at /etc/clickhouse-server/users.d/default-password.xml
and deleting this file will reset the password.
See also /etc/clickhouse-server/users.xml on the server where ClickHouse is installed.
Code: 516. DB::Exception: Received from abc003:9000. DB::Exception: default: Authentication failed: password is incorrect or there is no user with such name. (AUTHENTICATION_FAILED)
```
If `<host>` is used instead of `<host_regexp>`, there is no problem. | https://github.com/ClickHouse/ClickHouse/issues/40595 | https://github.com/ClickHouse/ClickHouse/pull/40769 | be81d21fdfa30523a4310fd554df70140d40a7de | 022f440ad08bb1c193a3d9ea3a708f5c2866ef60 | "2022-08-25T01:17:19Z" | c++ | "2022-08-30T11:35:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,593 | ["docker/test/stateless/run.sh", "tests/clickhouse-test"] | Check what components are not covered by tests | After functional tests are finished, run the following query:
```
SELECT name FROM system.functions WHERE NOT is_aggregate AND origin = 'System' AND alias_to = ''
AND name NOT IN (SELECT arrayJoin(used_functions) FROM system.query_log)
ORDER BY name
```
This will give you a list of functions not covered by tests.
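The same pattern applies to the other components; for example, a sketch of the analogous query for data type families (assuming the `system.data_type_families` table):

```sql
SELECT name FROM system.data_type_families WHERE alias_to = ''
AND name NOT IN (SELECT arrayJoin(used_data_type_families) FROM system.query_log)
ORDER BY name
```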
Check it for every component that is tracked in system.query_log:
```
used_aggregate_functions
used_aggregate_function_combinators
used_database_engines
used_data_type_families
used_dictionaries
used_formats
used_functions
used_storages
used_table_functions
``` | https://github.com/ClickHouse/ClickHouse/issues/40593 | https://github.com/ClickHouse/ClickHouse/pull/40647 | f86242c17e48c50b348f98326fd366c064b57a70 | 7bd1142f63c777fba545a2d36818d6ecb382d8a3 | "2022-08-24T22:03:03Z" | c++ | "2022-08-27T20:13:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,587 | ["src/Common/base58.h", "src/Functions/FunctionBase58Conversion.h", "tests/queries/0_stateless/02337_base58.reference", "tests/queries/0_stateless/02337_base58.sql"] | base58Encode does not work correctly if a string starts with null byte | `base58Encode` does not encode the data correctly. Compare output from `base58Encode` to the one from `hex`:
```
:) select base58Encode('\x00\x0b\xe3\xe1\xeb\xa1\x7a\x47\x3f\x89\xb0\xf7\xe8\xe2\x49\x40\xf2\x0a\xeb\x8e\xbc\xa7\x1a\x88\xfd\xe9\x5d\x4b\x83\xb7\x1a\x09') as encoded_data format JSONEachRow;
{"encoded_data":""}
1 row in set. Elapsed: 0.002 sec.
:) select hex('\x00\x0b\xe3\xe1\xeb\xa1\x7a\x47\x3f\x89\xb0\xf7\xe8\xe2\x49\x40\xf2\x0a\xeb\x8e\xbc\xa7\x1a\x88\xfd\xe9\x5d\x4b\x83\xb7\x1a\x09') as encoded_data format JSONEachRow;
{"encoded_data":"000BE3E1EBA17A473F89B0F7E8E24940F20AEB8EBCA71A88FDE95D4B83B71A09"}
1 row in set. Elapsed: 0.002 sec.
```
**Does it reproduce on recent release?**
Yes. Reproduced on ClickHouse version `22.8.1.2097`.
**Expected behavior**
Python encodes the data as follows:
```
$ python3 -m venv ~/.venv/base58
$ ~/.venv/base58/bin/pip install base58
$ ~/.venv/base58/bin/python3
>>> import base58
>>> data = b'\x00\x0b\xe3\xe1\xeb\xa1\x7a\x47\x3f\x89\xb0\xf7\xe8\xe2\x49\x40\xf2\x0a\xeb\x8e\xbc\xa7\x1a\x88\xfd\xe9\x5d\x4b\x83\xb7\x1a\x09';
>>> len(data)
32
>>> base58.b58encode(data)
b'1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix'
>>> len(base58.b58encode(data))
43
```
So I expected the `base58Encode` call to return '1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix'.
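The likely root cause is the handling of leading zero bytes: in standard (Bitcoin-style) Base58, every leading `0x00` byte of the input must be emitted as a leading `'1'` character of the output. A minimal sketch of what I would expect from a fixed implementation (my assumption, not verified against the code):

```sql
SELECT base58Encode(unhex('00'));   -- expected: '1'
SELECT base58Encode(unhex('0000')); -- expected: '11'
```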
Note: I am not sure, but the issue might have the same root cause as the one from https://github.com/ClickHouse/ClickHouse/issues/40536.
Thank you. | https://github.com/ClickHouse/ClickHouse/issues/40587 | https://github.com/ClickHouse/ClickHouse/pull/40620 | c9dea66f8d37c13e27678eeeec246c6a97e40e67 | 1c8c83ccf3f37d30d879fd88be3cd9bd21225b37 | "2022-08-24T16:25:42Z" | c++ | "2022-08-26T08:18:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,573 | ["docs/en/sql-reference/functions/arithmetic-functions.md", "docs/ru/sql-reference/functions/arithmetic-functions.md", "src/Functions/FunctionsDecimalArithmetics.cpp", "src/Functions/FunctionsDecimalArithmetics.h", "tests/queries/0_stateless/02475_precise_decimal_arithmetics.reference", "tests/queries/0_stateless/02475_precise_decimal_arithmetics.sql"] | Add precise Decimal division | Right now, `Decimal` division (#30341, #38054) can cause some pain.
It works fine, but with strict limitations -- e.g. we cannot divide Decimals that have a scale of more than half the maximum scale (#39600).
The currently used division is fast -- in fact, we do just one division (expensive) and one multiplication (not so expensive) -- but precision is poor.
I suggest adding a separate function (something like `preciseDecimalDiv(a: Decimal, b: Decimal, prcsn: UInt8)` with user-defined result precision) for decimal division (alongside the "usual" one).
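To illustrate, a hypothetical usage of such a function; the name, signature, and behavior here are only the proposal above, nothing like this exists yet:

```sql
-- Divide with an explicitly requested result scale of 20 (hypothetical function):
SELECT preciseDecimalDiv(toDecimal256(1, 40), toDecimal256(3, 40), 20);
-- hoped-for result: 0.33333333333333333333
```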
We could even replace the currently existing division with this more precise version, but it would be about 10 times slower (more division operations have to be made), which may be unacceptable if you don't need high precision.
That's why I suggest having 2 separate functions. | https://github.com/ClickHouse/ClickHouse/issues/40573 | https://github.com/ClickHouse/ClickHouse/pull/42438 | 58557c87c23535e9831695b39bf038cf68524d1d | 87fcf1b5dbb0449d472b71f13edaa3e6326212aa | "2022-08-24T11:02:54Z" | c++ | "2022-11-25T16:35:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,571 | ["contrib/NuRaft", "src/Coordination/KeeperServer.cpp", "src/Coordination/KeeperStorage.cpp", "src/Coordination/KeeperStorage.h"] | Clickhouse Keeper v22.7 crashing on every startup with segfault 11 | **Describe what's wrong**
ClickHouse Keeper crashes during initialization with a segfault. This repeats on every start: we're unable to start the Keeper server, either from existing data or from scratch; it always crashes after a few minutes of initializing. We're running a standalone dockerized Keeper instance.
**Does it reproduce on recent release?**
Yes, we're using version v22.7 of Keeper.
**How to reproduce**
* Which ClickHouse server version to use:
We tested these versions (so far) and all of them crash with the same error:
```
v22.7.1.2484
v22.7.4.16
v22.8.2.11
---
v22.5.1.2079
```
* Non-default settings, if any:
We tweaked some configurations which may be related, though it's inconclusive on our side:
```
<coordination_settings>
<force_sync>false</force_sync>
<max_requests_batch_size>2000</max_requests_batch_size>
</coordination_settings>
```
**Expected behavior**
We don't expect the process to crash.
**Error message and/or stacktrace**
```
successfully receive a snapshot (idx 4149762924 term 335) from leader
Compact logs up to log index 4149762924, our max log id is 4148717923
Seems like this node recovers from leaders snapshot, removing all logs
Removing changelog /var/lib/clickhouse/coordination/log/changelog_4148562925_4148662924.bin.zstd because of compaction
Trying to remove log /var/lib/clickhouse/coordination/log/changelog_4148662925_4148762924.bin.zstd which is current active log for write. Possibly this node recovers from snapshot
Removing changelog /var/lib/clickhouse/coordination/log/changelog_4148662925_4148762924.bin.zstd because of compaction
Removed changelog /var/lib/clickhouse/coordination/log/changelog_4148562925_4148662924.bin.zstd because of compaction.
Removed changelog /var/lib/clickhouse/coordination/log/changelog_4148662925_4148762924.bin.zstd because of compaction.
Compaction up to 4149762924 finished new min index 4149762925, new max index 4149762924
successfully compact the log store, will now ask the statemachine to apply the snapshot
########################################
(version 22.7.1.2484 (official build), build id: BB14295F0BE31ECF) (from thread 66) (no query) Received signal Segmentation fault (11)
Address: NULL pointer. Access: read. Address not mapped to object.
Stack trace: 0xa5860d
0. ? @ 0xa5860d in /usr/bin/clickhouse-keeper
Integrity check of the executable skipped because the reference checksum could not be read. (calculated checksum: E4590F1FEA25C5B140060D818924BBD1)
```
| https://github.com/ClickHouse/ClickHouse/issues/40571 | https://github.com/ClickHouse/ClickHouse/pull/40627 | e98ceb2575133ff1218fb79df3558089814aa223 | cecdcb50598aad333d8cb784b914a19a567e0b88 | "2022-08-24T10:50:52Z" | c++ | "2022-09-01T07:13:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,558 | ["docs/en/operations/tips.md", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | Update File System Section of Clickhouse Documentation | I was trying to use ClickHouse on an exFAT storage system, but every mutation query was throwing an error.
This thread (https://github.com/Altinity/clickhouse-operator/issues/813) helped me understand that ClickHouse requires a file system that supports hard links (which, to my understanding, exFAT does not).
It would have been nice to know that ClickHouse requires a file system that supports hard linking. I think here (https://clickhouse.com/docs/en/operations/tips/#file-system) would be a good place to put the information.
Thank you all for your hard work on this awesome project! | https://github.com/ClickHouse/ClickHouse/issues/40558 | https://github.com/ClickHouse/ClickHouse/pull/40967 | 507ad0c4d9c1b331a35dc9c91e79aeb9bc96ed35 | 928c1cd0d4cf47d83dc085ac89aaae6dc1802903 | "2022-08-23T22:27:24Z" | c++ | "2022-09-08T13:56:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,547 | ["utils/self-extracting-executable/decompressor.cpp"] | Self-extracting: possible race condition on decompression. | **Describe the unexpected behaviour**
When the compressed clickhouse binary is started multiple times, it is possible (and maybe quite probable) that some of the invocations will end up with errors. This happens because the decompression process manipulates files in order to substitute the compressed clickhouse with the decompressed one and run it.
This issue can affect the CI environment, leading to test failures that seem unrelated to the PR.
**How to reproduce**
Run compressed clickhouse multiple times.
**Expected behavior**
No errors related to decompression and process startup.
**Error message and/or stacktrace**
Various; it depends on which stages of the decompression process clash.
| https://github.com/ClickHouse/ClickHouse/issues/40547 | https://github.com/ClickHouse/ClickHouse/pull/40591 | 45fe9c800a1ca05edcc3a7ffaefab503e999f6b5 | 6815d7910266d0862118cd1d1519966438d3beaa | "2022-08-23T15:13:22Z" | c++ | "2022-09-02T10:26:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,536 | ["src/Common/base58.h", "src/Functions/FunctionBase58Conversion.h", "tests/queries/0_stateless/02337_base58.reference", "tests/queries/0_stateless/02337_base58.sql"] | base58Decode produces incorrect output | It looks like ClickHouse's base58Decode function works incorrectly.
For example, base58-encoded value `1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix` should be decoded to a 32 byte array. Here is an example from python which works correctly:
```
$ python3 -m venv ~/.venv/base58
$ ~/.venv/base58/bin/pip install base58
$ ~/.venv/base58/bin/python3
>>> import base58
>>> base58.b58decode('1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix')
b'\x00\x0b\xe3\xe1\xeb\xa1zG?\x89\xb0\xf7\xe8\xe2I@\xf2\n\xeb\x8e\xbc\xa7\x1a\x88\xfd\xe9]K\x83\xb7\x1a\t'
>>> len(base58.b58decode('1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix'))
32
>>> base58.b58decode('1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix').hex()
'000be3e1eba17a473f89b0f7e8e24940f20aeb8ebca71a88fde95d4b83b71a09'
```
ClickHouse produces an incorrect 31-byte string (notice that the leading NULL byte is missing):
```
:) SELECT length(base58Decode('1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix')) FORMAT Vertical;
Row 1:
──────
length(base58Decode('1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix')): 31
:) select hex(base58Decode('1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix')) format Vertical;
Row 1:
──────
hex(base58Decode('1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix')): 0BE3E1EBA17A473F89B0F7E8E24940F20AEB8EBCA71A88FDE95D4B83B71A09
```
and the decode/encode pair does not reproduce the original value:
```
:) SELECT base58Encode(base58Decode('1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix')) FORMAT Vertical;
Row 1:
──────
base58Encode(base58Decode('1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix')): BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix
```
**Does it reproduce on recent release?**
Yes. The bug was reproduced on:
```
:) select version() format Vertical;
Row 1:
──────
version(): 22.8.1.2097
```
**How to reproduce**
See above.
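Equivalently, a one-line round-trip check (a sketch) that should return 1 once the bug is fixed:

```sql
SELECT base58Encode(base58Decode('1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix'))
     = '1BWutmTvYPwDtmw9abTkS4Ssr8no61spGAvW1X6NDix' AS round_trip_ok;
```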
**Expected behavior**
* base58Decode should produce correct output
* base58Decode/base58Encode pair should return original value | https://github.com/ClickHouse/ClickHouse/issues/40536 | https://github.com/ClickHouse/ClickHouse/pull/40620 | c9dea66f8d37c13e27678eeeec246c6a97e40e67 | 1c8c83ccf3f37d30d879fd88be3cd9bd21225b37 | "2022-08-23T11:48:41Z" | c++ | "2022-08-26T08:18:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,333 | ["src/Columns/ColumnObject.cpp", "src/Columns/ColumnObject.h", "src/DataTypes/ObjectUtils.cpp", "src/DataTypes/ObjectUtils.h", "src/DataTypes/Serializations/SerializationObject.cpp", "tests/queries/0_stateless/02287_type_object_convert.reference", "tests/queries/0_stateless/02287_type_object_convert.sql"] | ClickHouse terminates on JSONAsObject | ClickHouse terminates when executing the following query (version `22.8.1.1917`):
`SELECT * FROM url('https://datahub.io/core/geo-countries/r/0.geojson', JSONAsObject)`
The same source works as expected with `JSONAsString`.
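For anyone trying to reproduce: `JSONAsObject` relies on the experimental `Object('json')` type, so (to my understanding) it has to be enabled first; a sketch:

```sql
SET allow_experimental_object_type = 1;
SELECT * FROM url('https://datahub.io/core/geo-countries/r/0.geojson', JSONAsObject);
```

The stack trace from the crash: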
```
2022.08.18 08:54:45.969150 [ 459927 ] {} <Fatal> BaseDaemon: 16.1. inlined from ./build_docker/../contrib/libcxx/include/vector:931: std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >::__vdeallocate()
2022.08.18 08:54:45.969209 [ 459927 ] {} <Fatal> BaseDaemon: 16.2. inlined from ../contrib/libcxx/include/vector:1275: std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >::__move_assign(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&, std::__1::integral_constant<bool, true>)
2022.08.18 08:54:45.969248 [ 459927 ] {} <Fatal> BaseDaemon: 16.3. inlined from ../contrib/libcxx/include/vector:1251: std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >::operator=(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&&)
2022.08.18 08:54:45.969273 [ 459927 ] {} <Fatal> BaseDaemon: 16.4. inlined from ../src/Processors/Chunk.h:53: DB::Chunk::operator=(DB::Chunk&&)
2022.08.18 08:54:45.969291 [ 459927 ] {} <Fatal> BaseDaemon: 16. ../src/Processors/Formats/Impl/ParallelParsingInputFormat.cpp:91: DB::ParallelParsingInputFormat::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0x15dea232 in /var/lib/clickhouse/clickhouse
2022.08.18 08:54:45.996892 [ 459927 ] {} <Fatal> BaseDaemon: 17.1. inlined from ./build_docker/../contrib/libcxx/include/__functional/function.h:832: std::__1::__function::__policy_func<void ()>::operator=(std::nullptr_t)
2022.08.18 08:54:45.996940 [ 459927 ] {} <Fatal> BaseDaemon: 17.2. inlined from ../contrib/libcxx/include/__functional/function.h:1157: std::__1::function<void ()>::operator=(std::nullptr_t)
2022.08.18 08:54:45.996963 [ 459927 ] {} <Fatal> BaseDaemon: 17. ../src/Common/ThreadPool.cpp:284: ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa440b28 in /var/lib/clickhouse/clickhouse
2022.08.18 08:54:46.025058 [ 459927 ] {} <Fatal> BaseDaemon: 18.1. inlined from ./build_docker/../src/Common/ThreadPool.cpp:0: operator()
2022.08.18 08:54:46.025110 [ 459927 ] {} <Fatal> BaseDaemon: 18.2. inlined from ../contrib/libcxx/include/type_traits:3640: decltype(static_cast<void>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&)::'lambda'()&>(void&&)
2022.08.18 08:54:46.025144 [ 459927 ] {} <Fatal> BaseDaemon: 18.3. inlined from ../contrib/libcxx/include/__functional/invoke.h:61: void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&)::'lambda'()&>(ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&)::'lambda'()&)
2022.08.18 08:54:46.025176 [ 459927 ] {} <Fatal> BaseDaemon: 18.4. inlined from ../contrib/libcxx/include/__functional/function.h:230: std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&)::'lambda'(), void ()>::operator()()
2022.08.18 08:54:46.025206 [ 459927 ] {} <Fatal> BaseDaemon: 18. ../contrib/libcxx/include/__functional/function.h:711: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0xa442597 in /var/lib/clickhouse/clickhouse
2022.08.18 08:54:46.051554 [ 459927 ] {} <Fatal> BaseDaemon: 19.1. inlined from ./build_docker/../contrib/libcxx/include/__functional/function.h:832: std::__1::__function::__policy_func<void ()>::operator=(std::nullptr_t)
2022.08.18 08:54:46.051593 [ 459927 ] {} <Fatal> BaseDaemon: 19.2. inlined from ../contrib/libcxx/include/__functional/function.h:1157: std::__1::function<void ()>::operator=(std::nullptr_t)
2022.08.18 08:54:46.051616 [ 459927 ] {} <Fatal> BaseDaemon: 19. ../src/Common/ThreadPool.cpp:284: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa43e44c in /var/lib/clickhouse/clickhouse
2022.08.18 08:54:46.079663 [ 459927 ] {} <Fatal> BaseDaemon: 20.1. inlined from ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:312: std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >::reset(std::__1::__thread_struct*)
2022.08.18 08:54:46.079704 [ 459927 ] {} <Fatal> BaseDaemon: 20.2. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:269: ~unique_ptr
2022.08.18 08:54:46.079726 [ 459927 ] {} <Fatal> BaseDaemon: 20.3. inlined from ../contrib/libcxx/include/tuple:210: ~__tuple_leaf
2022.08.18 08:54:46.079743 [ 459927 ] {} <Fatal> BaseDaemon: 20.4. inlined from ../contrib/libcxx/include/tuple:470: ~tuple
2022.08.18 08:54:46.079789 [ 459927 ] {} <Fatal> BaseDaemon: 20.5. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:54: std::__1::default_delete<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >::operator()(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>*) const
2022.08.18 08:54:46.079846 [ 459927 ] {} <Fatal> BaseDaemon: 20.6. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:315: std::__1::unique_ptr<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>, std::__1::default_delete<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> > >::reset(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>*)
2022.08.18 08:54:46.079897 [ 459927 ] {} <Fatal> BaseDaemon: 20.7. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:269: ~unique_ptr
2022.08.18 08:54:46.079942 [ 459927 ] {} <Fatal> BaseDaemon: 20. ../contrib/libcxx/include/thread:295: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0xa4416dd in /var/lib/clickhouse/clickhouse
2022.08.18 08:54:46.079989 [ 459927 ] {} <Fatal> BaseDaemon: 21. ? @ 0x7f989919a609 in ?
2022.08.18 08:54:46.080034 [ 459927 ] {} <Fatal> BaseDaemon: 22. clone @ 0x7f98990bf163 in ?
2022.08.18 08:54:46.231250 [ 459927 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: AFC43756B281818FB7C9D3A2CD17E62A)
``` | https://github.com/ClickHouse/ClickHouse/issues/40333 | https://github.com/ClickHouse/ClickHouse/pull/40483 | ddac1b3f113bd011c6de73a2525756ea21e6054d | 5a3e24c4e4c23b4e56a89ac0c06a0b357d92d3c7 | "2022-08-18T08:58:26Z" | c++ | "2022-08-31T14:07:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,330 | ["src/Interpreters/Context.cpp", "tests/integration/test_log_levels_update/configs/log.xml", "tests/integration/test_log_levels_update/test.py"] | Warning suppression mechanism | I have ClickHouse server running in virtual environment that reports the following warning:
```
# clickhouse-client
ClickHouse client version 22.7.2.15 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 22.7.2 revision 54457.
Warnings:
* Linux is not using a fast TSC clock source. Performance can be degraded. Check /sys/devices/system/clocksource/clocksource0/current_clocksource
# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
kvm-clock
```
It looks like a false positive as kvm-clock is considered a preferable clock source for kvm-based VMs.
https://opensource.com/article/17/6/timekeeping-linux-vms
Regardless of whether it's a bug or not, I'd like to have some mechanism that allows disabling particular warnings. There may be other questionable warnings in the future.
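As a partial workaround today, the active warnings can at least be listed and filtered externally; a sketch, assuming the `system.warnings` table present in recent releases:

```sql
SELECT message FROM system.warnings;
```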
| https://github.com/ClickHouse/ClickHouse/issues/40330 | https://github.com/ClickHouse/ClickHouse/pull/40548 | a16d4dd605e74c80a59be8bc2012f3aba0268876 | 5cbe7e08464a0ed2744551eec8d4ac8f03dc4e75 | "2022-08-18T08:47:44Z" | c++ | "2022-08-29T12:02:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,325 | [".github/workflows/backport_branches.yml", ".github/workflows/docs_check.yml", ".github/workflows/pull_request.yml", "tests/ci/merge_pr.py"] | Automatically merge the best pull requests | If a pull request is approved, has had no modifications since approval, all the checks are green, and there are more than 100 checks, merge it automatically.
https://github.com/ClickHouse/ClickHouse/blob/be29057de1835f6f4a17e03a422b45b81efe6833/docs/ru/whats-new/extended-roadmap.md#735-%D0%BD%D0%B0%D1%87%D0%B0%D0%BB%D1%8C%D0%BD%D1%8B%D0%B5-%D0%BF%D1%80%D0%B0%D0%B2%D0%B8%D0%BB%D0%B0-%D0%B4%D0%BB%D1%8F-%D0%B0%D0%B2%D1%82%D0%BE-merge-nachalnye-pravila-dlia-avto-merge | https://github.com/ClickHouse/ClickHouse/issues/40325 | https://github.com/ClickHouse/ClickHouse/pull/41110 | 75318e4cee34c2757f96322c797f3e00022ce2fb | e6b3f54842290cbca46808791370b4b13bd9ffde | "2022-08-18T06:25:32Z" | c++ | "2023-01-13T20:16:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,280 | ["src/Client/ClientBase.cpp"] | / (shortcut) to repeat the last (previous) SQL command (clickHouse-client) | The Oracle CLI client (sqlplus) has a neat feature: if you type `/[enter]` at the prompt, it re-executes the last SQL.
```sql
:) select 1;
┌─1─┐
│ 1 │
└───┘
:) /
┌─1─┐
│ 1 │
└───┘
```
Currently `<up><enter>` pollutes the screen with the SQL text, which can be a whole SQL script rather than a single query. Probably echo should be disabled in this case as well. | https://github.com/ClickHouse/ClickHouse/issues/40280 | https://github.com/ClickHouse/ClickHouse/pull/40750 | 56eece40ec3a91eeb156e85c655b19293d40e558 | b3fc6eafe7098050d3b5a9046e3734b721be8c2b | "2022-08-16T16:23:08Z" | c++ | "2022-09-01T08:27:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,105 | ["src/Common/ErrorCodes.cpp", "src/Storages/MergeTree/MergeTreeSettings.h", "src/Storages/MergeTree/ReplicatedMergeTreeAttachThread.cpp", "src/Storages/MergeTree/ReplicatedMergeTreeAttachThread.h", "src/Storages/MergeTree/ReplicatedMergeTreeRestartingThread.cpp", "src/Storages/StorageReplicatedMergeTree.cpp", "src/Storages/StorageReplicatedMergeTree.h", "tests/integration/test_replicated_database/configs/config.xml", "tests/integration/test_replicated_database/test.py"] | ReplicatedMergeTree tables should be able to start up if ZooKeeper is configured but unavailable | **Describe the solution you'd like**
We can already run clickhouse-server without a configured ZooKeeper (the tables will start in read-only mode)
and with configured ZooKeeper, if it starts to be inaccessible at runtime (the tables will be read-only until the session is re-established).
Let's also make it possible to start up when ZooKeeper is unavailable.
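For context, replicas that fell into read-only mode can already be observed with a query like this sketch against `system.replicas`:

```sql
SELECT database, table, is_readonly
FROM system.replicas
WHERE is_readonly;
```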
**Additional context**
If that turns out to be very hard to implement, we can consider other options:
- lazy attaching of the tables after server start-up;
- a maintenance mode when the tables are not attached at start-up and require manual ATTACH queries. | https://github.com/ClickHouse/ClickHouse/issues/40105 | https://github.com/ClickHouse/ClickHouse/pull/40148 | 13de3d0665545ba6d34a8b1e65b7f7f6670f1eb5 | eae2667a1c29565c801be0ffd465f8bfcffe77ef | "2022-08-11T06:03:50Z" | c++ | "2022-08-25T16:39:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,097 | ["src/Functions/array/arrayDifference.cpp", "tests/queries/0_stateless/01716_array_difference_overflow.reference", "tests/queries/0_stateless/01716_array_difference_overflow.sql"] | arrayDifference function didn't get expected result on type UInt32 | Clickhouse Server version: 22.3.2.1 and 21.6.5.37
Minimal Reproducible Code:
```sql
select
arrayJoin(arrayDifference(groupArray(nb))) as res
from
(
select
toUInt32(number % 11) as nb
from
numbers(100))
```
When I use the arrayDifference function on an array of UInt32 elements, I get huge numbers around 2^32 instead of the expected negative numbers.
The same function works properly on UInt8, UInt16 and even UInt64.
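An even smaller sketch of the same symptom; the expected output is my reading of the subtraction type-inference rule quoted below:

```sql
SELECT arrayDifference([toUInt32(1), toUInt32(0)]);
-- expected: [0,-1]; on the affected versions, a value around 2^32 appears instead
```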
And also I found
> `The type of elements in the resulting array is determined by the type inference rules for subtraction (e.g. UInt8 - UInt8 = Int16).`
in the documentation at https://clickhouse.com/docs/en/sql-reference/functions/array-functions/#arraydifference
So it seems like an unexpected result, but I'm not sure. Hoping for your answer! | https://github.com/ClickHouse/ClickHouse/issues/40097 | https://github.com/ClickHouse/ClickHouse/pull/40211 | 3a5f05bd227d8ccd5d634bad5fc09ce3ec1cd0bc | ad936ae32a591759cefde5e79d7360bfebbc3e7a | "2022-08-11T02:54:44Z" | c++ | "2022-08-15T01:52:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,021 | ["src/Interpreters/Set.cpp", "src/Storages/MergeTree/KeyCondition.cpp", "tests/queries/0_stateless/02416_in_set_same_ast_diff_columns.reference", "tests/queries/0_stateless/02416_in_set_same_ast_diff_columns.sql"] | KeyCondition: `index out of bounds` when extracting prepared set | ```
../contrib/libcxx/include/vector:1449: _LIBCPP_ASSERT '__n < size()' failed. vector[] index out of bounds Received signal 6 Received signal Aborted (6)
```
https://s3.amazonaws.com/clickhouse-test-reports/39973/24ee3e022df7e970b5925fea657bec3ad7303702/fuzzer_astfuzzerdebug//report.html
| https://github.com/ClickHouse/ClickHouse/issues/40021 | https://github.com/ClickHouse/ClickHouse/pull/40850 | 7dea71c83f902e40c45f62f8eade627ec638c8fd | fdcced8962fa481d30cccd6994bf5fe970dd5eb0 | "2022-08-09T09:21:34Z" | c++ | "2022-09-02T18:40:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 40,014 | ["src/Storages/StorageMerge.cpp", "tests/queries/0_stateless/02402_merge_engine_with_view.reference", "tests/queries/0_stateless/02402_merge_engine_with_view.sql"] | Merge Table Engine does not use the indexes of tables. | We use the Merge table engine to combine a ReplicatedMergeTree table and a normal view (not a Distributed table) that is based on another ReplicatedMergeTree table.
And we found that the Merge table engine does not use the indexes of the view.
But if we use Distributed Table Engine to make these tables and views distributed, the Merge Table Engine will use their indexes.
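A hypothetical minimal setup that matches the report (all names are assumed, and plain MergeTree is used instead of ReplicatedMergeTree to keep the sketch self-contained):

```sql
CREATE TABLE t1 (k UInt64, v String) ENGINE = MergeTree ORDER BY k;
CREATE TABLE t2 (k UInt64, v String) ENGINE = MergeTree ORDER BY k;
CREATE VIEW v2 AS SELECT * FROM t2;
CREATE TABLE m (k UInt64, v String) ENGINE = Merge(currentDatabase(), '^(t1|v2)$');

-- With the behavior described above, the condition is applied to t1's primary
-- key index, but not pushed through the view down to t2's index.
SELECT count() FROM m WHERE k = 42;
```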
So what is the difference between these two modes?
ClickHouse version: 21.9.2.17
| https://github.com/ClickHouse/ClickHouse/issues/40014 | https://github.com/ClickHouse/ClickHouse/pull/40233 | 7c18d5b34f62d45e46d3ab2eabe3dea0d8cc3777 | 24682556a4d0fe7fd7f8cb31fa479a576b9829e6 | "2022-08-09T03:33:06Z" | c++ | "2022-08-19T12:24:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,989 | ["tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/counter.clj"] | Jepsen: invalid analysis | https://s3.amazonaws.com/clickhouse-test-reports/0/b4f5d9ca10b2ee4285fce354c232b8d50c829380/clickhouse_keeper_jepsen.html
cc: @antonio2368 | https://github.com/ClickHouse/ClickHouse/issues/39989 | https://github.com/ClickHouse/ClickHouse/pull/39992 | be69169f9712931e93d0ede6fa92e6ce9464a19f | 4b4b5e91b2af30a2c3cd4aef6dae9bcd79502e5a | "2022-08-08T13:04:36Z" | c++ | "2022-08-11T18:38:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,987 | ["src/Functions/FunctionsAES.h", "tests/queries/0_stateless/02384_decrypt_bad_arguments.reference", "tests/queries/0_stateless/02384_decrypt_bad_arguments.sql"] | FunctionDecrypt: Too large size ... passed to allocator. It indicates an error | https://s3.amazonaws.com/clickhouse-test-reports/0/73f4643afa8f135f7a7f0e39be41a09ccc3c8d96/fuzzer_astfuzzerubsan//report.html
```
2022.08.08 03:52:55.453638 [ 186 ] {cc0a6186-fc89-4e20-b705-9b884df4b79f} <Fatal> : Logical error: 'Too large size (18446603393237392928) passed to allocator. It indicates an error.'.
2022.08.08 03:55:28.284145 [ 448 ] {} <Fatal> BaseDaemon: ########################################
2022.08.08 03:55:28.284189 [ 448 ] {} <Fatal> BaseDaemon: (version 22.8.1.1 (official build), build id: 696C34768D6D0C4D) (from thread 186) (query_id: cc0a6186-fc89-4e20-b705-9b884df4b79f) (query: SELECT 0., decrypt('aes-128-gcm', [1024, 65535, NULL, NULL, 9223372036854775807, 1048576, NULL], 'text', 'key', 'IV')) Received signal Aborted (6)
2022.08.08 03:55:28.284212 [ 448 ] {} <Fatal> BaseDaemon:
2022.08.08 03:55:28.284271 [ 448 ] {} <Fatal> BaseDaemon: Stack trace: 0x7ff43edf400b 0x7ff43edd3859 0x101a2c03 0x101a2eaf 0x102040a8 0x10203614 0x1021c397 0x10217090 0x17fe9a47 0x17fe96d9 0x17fe8cd4 0x15b19d3f 0x15b18b46 0x243ff5f9 0x243feccb 0x243ff4f8 0x244006ed 0x2440238b 0x24ca2236 0x251a39ec 0x251b7094 0x251a974b 0x251ae5bc 0x25180628 0x25168950 0x251749b9 0x2517a242 0x25955c98 0x25947eac 0x259407e9 0x2593c900 0x259c08e8 0x259bd20e 0x259bad11 0x258e37aa 0x258e1ee9 0x25df079d 0x25ded92d 0x26ef18b0 0x26f10c9a 0x27e3e5cc 0x27e3eaba
2022.08.08 03:55:28.284315 [ 448 ] {} <Fatal> BaseDaemon: 3. gsignal in ?
2022.08.08 03:55:28.284336 [ 448 ] {} <Fatal> BaseDaemon: 4. abort in ?
2022.08.08 03:55:28.311088 [ 448 ] {} <Fatal> BaseDaemon: 5. ./build_docker/../src/Common/Exception.cpp:47: DB::abortOnFailedAssertion(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in /workspace/clickhouse
2022.08.08 03:55:28.336511 [ 448 ] {} <Fatal> BaseDaemon: 6. ./build_docker/../src/Common/Exception.cpp:70: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) in /workspace/clickhouse
2022.08.08 03:55:28.349377 [ 448 ] {} <Fatal> BaseDaemon: 7.1. inlined from ./build_docker/../contrib/libcxx/include/string:1445: std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__is_long() const
2022.08.08 03:55:28.349429 [ 448 ] {} <Fatal> BaseDaemon: 7.2. inlined from ../contrib/libcxx/include/string:2231: ~basic_string
2022.08.08 03:55:28.349448 [ 448 ] {} <Fatal> BaseDaemon: 7. ../src/Common/Exception.h:37: DB::Exception::Exception<unsigned long&>(int, fmt::v8::basic_format_string<char, fmt::v8::type_identity<unsigned long&>::type>, unsigned long&) in /workspace/clickhouse
2022.08.08 03:55:28.361790 [ 448 ] {} <Fatal> BaseDaemon: 8.1. inlined from ./build_docker/../src/Common/Allocator.h:265: Allocator<false, false>::checkSize(unsigned long)
2022.08.08 03:55:28.361824 [ 448 ] {} <Fatal> BaseDaemon: 8. ../src/Common/Allocator.h:94: Allocator<false, false>::alloc(unsigned long, unsigned long) in /workspace/clickhouse
2022.08.08 03:55:28.373590 [ 448 ] {} <Fatal> BaseDaemon: 9. ./build_docker/../src/Common/PODArray.h:130: void DB::PODArrayBase<1ul, 4096ul, Allocator<false, false>, 15ul, 16ul>::alloc<>(unsigned long) in /workspace/clickhouse
2022.08.08 03:55:28.383622 [ 448 ] {} <Fatal> BaseDaemon: 10.1. inlined from ./build_docker/../src/Common/PODArray.h:260: DB::PODArrayBase<1ul, 4096ul, Allocator<false, false>, 15ul, 16ul>::resize_assume_reserved(unsigned long)
2022.08.08 03:55:28.383657 [ 448 ] {} <Fatal> BaseDaemon: 10. ../src/Common/PODArray.h:248: void DB::PODArrayBase<1ul, 4096ul, Allocator<false, false>, 15ul, 16ul>::resize<>(unsigned long) in /workspace/clickhouse
2022.08.08 03:55:29.341624 [ 448 ] {} <Fatal> BaseDaemon: 11. COW<DB::IColumn>::immutable_ptr<DB::IColumn> DB::FunctionDecrypt<(anonymous namespace)::DecryptImpl>::doDecryptImpl<(OpenSSLDetails::CipherMode)2>(evp_cipher_st const*, unsigned long, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&) in /workspace/clickhouse
2022.08.08 03:55:30.268248 [ 448 ] {} <Fatal> BaseDaemon: 12. DB::FunctionDecrypt<(anonymous namespace)::DecryptImpl>::doDecrypt(evp_cipher_st const*, unsigned long, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&) in /workspace/clickhouse
2022.08.08 03:55:31.194569 [ 448 ] {} <Fatal> BaseDaemon: 13. DB::FunctionDecrypt<(anonymous namespace)::DecryptImpl>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const in /workspace/clickhouse
2022.08.08 03:55:32.120728 [ 448 ] {} <Fatal> BaseDaemon: 14. DB::IFunction::executeImplDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const in /workspace/clickhouse
2022.08.08 03:55:33.047053 [ 448 ] {} <Fatal> BaseDaemon: 15. DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const in /workspace/clickhouse
2022.08.08 03:55:33.058381 [ 448 ] {} <Fatal> BaseDaemon: 16. ./build_docker/../src/Functions/IFunction.cpp:0: DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const in /workspace/clickhouse
2022.08.08 03:55:33.069354 [ 448 ] {} <Fatal> BaseDaemon: 17.1. inlined from ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:200: boost::intrusive_ptr<DB::IColumn const>::operator->() const
2022.08.08 03:55:33.069382 [ 448 ] {} <Fatal> BaseDaemon: 17. ../src/Functions/IFunction.cpp:165: DB::IExecutableFunction::defaultImplementationForConstantArguments(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const in /workspace/clickhouse
2022.08.08 03:55:33.080443 [ 448 ] {} <Fatal> BaseDaemon: 18.1. inlined from ./build_docker/../contrib/boost/boost/smart_ptr/detail/operator_bool.hpp:14: boost::intrusive_ptr<DB::IColumn const>::operator bool() const
2022.08.08 03:55:33.080470 [ 448 ] {} <Fatal> BaseDaemon: 18. ../src/Functions/IFunction.cpp:239: DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const in /workspace/clickhouse
2022.08.08 03:55:33.092187 [ 448 ] {} <Fatal> BaseDaemon: 19.1. inlined from ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:115: intrusive_ptr
2022.08.08 03:55:33.092216 [ 448 ] {} <Fatal> BaseDaemon: 19.2. inlined from ../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:122: boost::intrusive_ptr<DB::IColumn const>::operator=(boost::intrusive_ptr<DB::IColumn const>&&)
2022.08.08 03:55:33.092235 [ 448 ] {} <Fatal> BaseDaemon: 19.3. inlined from ../src/Common/COW.h:136: COW<DB::IColumn>::immutable_ptr<DB::IColumn>::operator=(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&&)
2022.08.08 03:55:33.092250 [ 448 ] {} <Fatal> BaseDaemon: 19. ../src/Functions/IFunction.cpp:303: DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const in /workspace/clickhouse
2022.08.08 03:55:33.104346 [ 448 ] {} <Fatal> BaseDaemon: 20. ./build_docker/../src/Functions/IFunction.cpp:0: DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const in /workspace/clickhouse
2022.08.08 03:55:33.268935 [ 448 ] {} <Fatal> BaseDaemon: 21. ./build_docker/../src/Interpreters/ActionsDAG.cpp:0: DB::ActionsDAG::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) in /workspace/clickhouse
2022.08.08 03:55:33.317941 [ 448 ] {} <Fatal> BaseDaemon: 22. ./build_docker/../src/Interpreters/ActionsVisitor.cpp:0: DB::ScopeStack::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) in /workspace/clickhouse
2022.08.08 03:55:33.369927 [ 448 ] {} <Fatal> BaseDaemon: 23.1. inlined from ./build_docker/../contrib/libcxx/include/string:1445: std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__is_long() const
2022.08.08 03:55:33.369962 [ 448 ] {} <Fatal> BaseDaemon: 23.2. inlined from ../contrib/libcxx/include/string:2231: ~basic_string
2022.08.08 03:55:33.369980 [ 448 ] {} <Fatal> BaseDaemon: 23. ../src/Interpreters/ActionsVisitor.h:185: DB::ActionsMatcher::Data::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) in /workspace/clickhouse
2022.08.08 03:55:33.419598 [ 448 ] {} <Fatal> BaseDaemon: 24. ./build_docker/../src/Interpreters/ActionsVisitor.cpp:0: DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) in /workspace/clickhouse
2022.08.08 03:55:33.470536 [ 448 ] {} <Fatal> BaseDaemon: 25. ./build_docker/../contrib/libcxx/include/vector:0: DB::ActionsMatcher::visit(DB::ASTExpressionList&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) in /workspace/clickhouse
2022.08.08 03:55:33.525620 [ 448 ] {} <Fatal> BaseDaemon: 26. ./build_docker/../src/Interpreters/InDepthNodeVisitor.h:43: DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) in /workspace/clickhouse
2022.08.08 03:55:33.580978 [ 448 ] {} <Fatal> BaseDaemon: 27.1. inlined from ./build_docker/../src/Interpreters/ActionsVisitor.h:190: DB::ActionsMatcher::Data::getActions()
2022.08.08 03:55:33.581005 [ 448 ] {} <Fatal> BaseDaemon: 27. ../src/Interpreters/ExpressionAnalyzer.cpp:552: DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ActionsDAG>&, bool) in /workspace/clickhouse
2022.08.08 03:55:33.642058 [ 448 ] {} <Fatal> BaseDaemon: 28. ./build_docker/../src/Interpreters/ExpressionAnalyzer.cpp:0: DB::SelectQueryExpressionAnalyzer::appendSelect(DB::ExpressionActionsChain&, bool) in /workspace/clickhouse
2022.08.08 03:55:33.705421 [ 448 ] {} <Fatal> BaseDaemon: 29. ./build_docker/../src/Interpreters/ExpressionAnalyzer.cpp:0: DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::__1::shared_ptr<DB::FilterDAGInfo> const&, std::__1::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) in /workspace/clickhouse
2022.08.08 03:55:33.788633 [ 448 ] {} <Fatal> BaseDaemon: 30. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:792: DB::InterpreterSelectQuery::getSampleBlockImpl() in /workspace/clickhouse
2022.08.08 03:55:33.868042 [ 448 ] {} <Fatal> BaseDaemon: 31. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:633: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, DB::SubqueryForSet, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, DB::SubqueryForSet> > >, std::__1::unordered_map<DB::PreparedSetKey, std::__1::shared_ptr<DB::Set>, DB::PreparedSetKey::Hash, std::__1::equal_to<DB::PreparedSetKey>, std::__1::allocator<std::__1::pair<DB::PreparedSetKey const, std::__1::shared_ptr<DB::Set> > > >)::$_1::operator()(bool) const in /workspace/clickhouse
2022.08.08 03:55:33.945759 [ 448 ] {} <Fatal> BaseDaemon: 32. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:0: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, DB::SubqueryForSet, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, DB::SubqueryForSet> > >, std::__1::unordered_map<DB::PreparedSetKey, std::__1::shared_ptr<DB::Set>, DB::PreparedSetKey::Hash, std::__1::equal_to<DB::PreparedSetKey>, std::__1::allocator<std::__1::pair<DB::PreparedSetKey const, std::__1::shared_ptr<DB::Set> > > >) in /workspace/clickhouse
2022.08.08 03:55:34.023929 [ 448 ] {} <Fatal> BaseDaemon: 33. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:191: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) in /workspace/clickhouse
2022.08.08 03:55:34.057693 [ 448 ] {} <Fatal> BaseDaemon: 34. ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:725: std::__1::__unique_if<DB::InterpreterSelectQuery>::__unique_single std::__1::make_unique<DB::InterpreterSelectQuery, std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&>(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) in /workspace/clickhouse
2022.08.08 03:55:34.085100 [ 448 ] {} <Fatal> BaseDaemon: 35.1. inlined from ./build_docker/../contrib/libcxx/include/__memory/compressed_pair.h:48: __compressed_pair_elem<DB::InterpreterSelectQuery *, void>
2022.08.08 03:55:34.085129 [ 448 ] {} <Fatal> BaseDaemon: 35.2. inlined from ../contrib/libcxx/include/__memory/compressed_pair.h:130: __compressed_pair<DB::InterpreterSelectQuery *, std::__1::default_delete<DB::InterpreterSelectQuery> >
2022.08.08 03:55:34.085145 [ 448 ] {} <Fatal> BaseDaemon: 35.3. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:220: unique_ptr<DB::InterpreterSelectQuery, std::__1::default_delete<DB::InterpreterSelectQuery>, void, void>
2022.08.08 03:55:34.085161 [ 448 ] {} <Fatal> BaseDaemon: 35. ../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:244: DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) in /workspace/clickhouse
2022.08.08 03:55:34.111710 [ 448 ] {} <Fatal> BaseDaemon: 36. ./build_docker/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:0: DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) in /workspace/clickhouse
2022.08.08 03:55:34.124925 [ 448 ] {} <Fatal> BaseDaemon: 37. ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:0: std::__1::__unique_if<DB::InterpreterSelectWithUnionQuery>::__unique_single std::__1::make_unique<DB::InterpreterSelectWithUnionQuery, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions const&>(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions const&) in /workspace/clickhouse
2022.08.08 03:55:34.136365 [ 448 ] {} <Fatal> BaseDaemon: 38.1. inlined from ./build_docker/../contrib/libcxx/include/__memory/compressed_pair.h:48: __compressed_pair_elem<DB::InterpreterSelectWithUnionQuery *, void>
2022.08.08 03:55:34.136391 [ 448 ] {} <Fatal> BaseDaemon: 38.2. inlined from ../contrib/libcxx/include/__memory/compressed_pair.h:130: __compressed_pair<DB::InterpreterSelectWithUnionQuery *, std::__1::default_delete<DB::InterpreterSelectWithUnionQuery> >
2022.08.08 03:55:34.136408 [ 448 ] {} <Fatal> BaseDaemon: 38.3. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:220: unique_ptr<DB::InterpreterSelectWithUnionQuery, std::__1::default_delete<DB::InterpreterSelectWithUnionQuery>, void, void>
2022.08.08 03:55:34.136422 [ 448 ] {} <Fatal> BaseDaemon: 38. ../src/Interpreters/InterpreterFactory.cpp:130: DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) in /workspace/clickhouse
2022.08.08 03:55:34.180086 [ 448 ] {} <Fatal> BaseDaemon: 39. ./build_docker/../src/Interpreters/executeQuery.cpp:648: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) in /workspace/clickhouse
2022.08.08 03:55:34.227023 [ 448 ] {} <Fatal> BaseDaemon: 40. ./build_docker/../src/Interpreters/executeQuery.cpp:1085: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) in /workspace/clickhouse
2022.08.08 03:55:34.265480 [ 448 ] {} <Fatal> BaseDaemon: 41. ./build_docker/../src/Server/TCPHandler.cpp:336: DB::TCPHandler::runImpl() in /workspace/clickhouse
2022.08.08 03:55:34.312267 [ 448 ] {} <Fatal> BaseDaemon: 42. ./build_docker/../src/Server/TCPHandler.cpp:1827: DB::TCPHandler::run() in /workspace/clickhouse
2022.08.08 03:55:34.316194 [ 448 ] {} <Fatal> BaseDaemon: 43. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() in /workspace/clickhouse
2022.08.08 03:55:34.322451 [ 448 ] {} <Fatal> BaseDaemon: 44.1. inlined from ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:54: std::__1::default_delete<Poco::Net::TCPServerConnection>::operator()(Poco::Net::TCPServerConnection*) const
2022.08.08 03:55:34.322480 [ 448 ] {} <Fatal> BaseDaemon: 44.2. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:315: std::__1::unique_ptr<Poco::Net::TCPServerConnection, std::__1::default_delete<Poco::Net::TCPServerConnection> >::reset(Poco::Net::TCPServerConnection*)
2022.08.08 03:55:34.322496 [ 448 ] {} <Fatal> BaseDaemon: 44.3. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:269: ~unique_ptr
2022.08.08 03:55:34.322510 [ 448 ] {} <Fatal> BaseDaemon: 44. ../contrib/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() in /workspace/clickhouse
2022.08.08 03:55:34.635150 [ 448 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: FA761F7044D0D7A2EB5DF549555410CC)
```
cc: @Enmk | https://github.com/ClickHouse/ClickHouse/issues/39987 | https://github.com/ClickHouse/ClickHouse/pull/40194 | cb75da8e4aa08171a9e41bbeb16edebb4305d64b | 3d0948b77ef12a6bce1551c8b7e3f6869a3cfd84 | "2022-08-08T12:59:53Z" | c++ | "2022-08-14T06:46:42Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,960 | ["programs/server/play.html"] | Play UI regression: no progress indicator. | The progress indicator does not show during query execution.
How to reproduce:
`SELECT sleep(1), rand()` | https://github.com/ClickHouse/ClickHouse/issues/39960 | https://github.com/ClickHouse/ClickHouse/pull/39961 | 2ba87b1b8149d12dfa99d11e0ee0e0cc7ff70f16 | 738c94a9e0a4f0c38e0cfe00fa7eb250d4563f59 | "2022-08-08T00:31:04Z" | c++ | "2022-08-08T05:07:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,957 | ["programs/server/play.html"] | Play UI: non pixel-perfect on iPad | **Describe the issue**
Slightly misaligned controls.
The fix is to set `margin: 0` and `border-radius: 0` on the affected controls. | https://github.com/ClickHouse/ClickHouse/issues/39957 | https://github.com/ClickHouse/ClickHouse/pull/39961 | 2ba87b1b8149d12dfa99d11e0ee0e0cc7ff70f16 | 738c94a9e0a4f0c38e0cfe00fa7eb250d4563f59 | "2022-08-07T19:17:55Z" | c++ | "2022-08-08T05:07:43Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,949 | ["CMakeLists.txt", "cmake/warnings.cmake"] | Cannot build with clang-16 latest commit | Hello,
When I try to build the project with clang-16, I get the following error:
```ClickHouse/contrib/magic_enum/include/magic_enum.hpp:448:41: error: no matching function for call to 'is_valid'```
And it's very likely related to this [issue](https://github.com/Neargye/magic_enum/issues/204).
More details:
```
-- Cross-compiling for target:
-- Using compiler:
clang version 16.0.0 (https://github.com/llvm/llvm-project.git e21202dac18ed7f718d26a0e131f96b399b4891c)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /usr/local/bin
-- Using linker: /usr/bin/ld.lld
-- Using archiver: /usr/local/bin/llvm-ar
-- Using ranlib: /usr/local/bin/llvm-ranlib
-- Using install-name-tool: /usr/local/bin/llvm-install-name-tool
-- Using objcopy: /usr/local/bin/llvm-objcopy
-- Using strip: /usr/local/bin/llvm-strip
-- Using ccache: /usr/bin/ccache (version 3.7.7)
-- HEAD's commit hash f52b6748db0c5076ae3a53390d75dbc71aa88b6a
On branch master
Your branch is up to date with 'origin/master'.
nothing to commit, working tree clean
-- CMAKE_BUILD_TYPE is not set, set to default = RelWithDebInfo
-- CMAKE_BUILD_TYPE: RelWithDebInfo
-- Adding .gdb-index via --gdb-index linker option.
-- No official build: A checksum hash will not be added to the clickhouse executable
-- Default libraries: -nodefaultlibs -lgcc -lc -lm -lrt -lpthread -ldl
-- Some symbols from glibc will be replaced for compatibility
-- Using libunwind: unwind
-- Using exception handler: unwind
-- Unit tests are enabled
-- Building for: Linux x86_64 ;
USE_STATIC_LIBRARIES=ON
SPLIT_SHARED_LIBRARIES=OFF
-- Adding contrib module miniselect (configuring with miniselect-cmake)
-- Adding contrib module pdqsort (configuring with pdqsort-cmake)
-- Adding contrib module sparsehash-c11 (configuring with sparsehash-c11-cmake)
-- Adding contrib module abseil-cpp (configuring with abseil-cpp-cmake)
-- Adding contrib module magic_enum (configuring with magic-enum-cmake)
-- Adding contrib module boost (configuring with boost-cmake)
-- Adding contrib module cctz (configuring with cctz-cmake)
-- Packaging with tzdata version: 2022a
-- Adding contrib module consistent-hashing (configuring with consistent-hashing)
-- Adding contrib module dragonbox (configuring with dragonbox-cmake)
-- Adding contrib module vectorscan (configuring with vectorscan-cmake)
-- Adding contrib module jemalloc (configuring with jemalloc-cmake)
-- jemalloc malloc_conf: percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000
-- Adding contrib module libcpuid (configuring with libcpuid-cmake)
-- Adding contrib module libdivide (configuring with libdivide)
-- Adding contrib module libmetrohash (configuring with libmetrohash)
-- Adding contrib module lz4 (configuring with lz4-cmake)
-- Adding contrib module murmurhash (configuring with murmurhash)
-- Adding contrib module replxx (configuring with replxx-cmake)
-- Adding contrib module unixodbc (configuring with unixodbc-cmake)
-- Adding contrib module nanodbc (configuring with nanodbc-cmake)
-- Adding contrib module capnproto (configuring with capnproto-cmake)
-- Adding contrib module yaml-cpp (configuring with yaml-cpp-cmake)
-- Adding contrib module re2 (configuring with re2-cmake)
-- Adding contrib module xz (configuring with xz-cmake)
-- Adding contrib module brotli (configuring with brotli-cmake)
-- Adding contrib module double-conversion (configuring with double-conversion-cmake)
-- Adding contrib module boringssl (configuring with boringssl-cmake)
-- Adding contrib module poco (configuring with poco-cmake)
-- Using Poco::Crypto
-- Using Poco::Data::ODBC
-- Adding contrib module croaring (configuring with croaring-cmake)
-- Adding contrib module zstd (configuring with zstd-cmake)
-- ZSTD VERSION 1.5.0
-- Adding contrib module zlib-ng (configuring with zlib-ng-cmake)
-- Adding contrib module bzip2 (configuring with bzip2-cmake)
-- Adding contrib module minizip-ng (configuring with minizip-ng-cmake)
-- Adding contrib module snappy (configuring with snappy-cmake)
-- Adding contrib module rocksdb (configuring with rocksdb-cmake)
-- Adding contrib module thrift (configuring with thrift-cmake)
-- Adding contrib module arrow (configuring with arrow-cmake)
-- Using toolchain file: /home/hanzhou/repos/ClickHouse/cmake/linux/toolchain-x86_64.cmake.
-- CMAKE_CXX_FLAGS: --gcc-toolchain=/home/hanzhou/repos/ClickHouse/cmake/linux/../../contrib/sysroot/linux-x86_64 -std=c++20 -fdiagnostics-color=always -Xclang -fuse-ctor-homing -fsized-deallocation -gdwarf-aranges -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -mavx -fasynchronous-unwind-tables -ffile-prefix-map=/home/hanzhou/repos/ClickHouse=. -falign-functions=32 -mbranches-within-32B-boundaries -fdiagnostics-absolute-paths -fstrict-vtable-pointers -fexperimental-new-pass-manager -w
-- Proceeding with version: 1.12.0.372
-- Adding contrib module avro (configuring with avro-cmake)
-- Adding contrib module protobuf (configuring with protobuf-cmake)
-- Adding contrib module openldap (configuring with openldap-cmake)
-- Adding contrib module grpc (configuring with grpc-cmake)
-- Adding contrib module msgpack-c (configuring with msgpack-c-cmake)
-- Adding contrib module wyhash (configuring with wyhash-cmake)
-- Adding contrib module cityhash102 (configuring with cityhash102)
-- Adding contrib module libfarmhash (configuring with libfarmhash)
-- Adding contrib module icu (configuring with icu-cmake)
-- Adding contrib module h3 (configuring with h3-cmake)
-- Adding contrib module mariadb-connector-c (configuring with mariadb-connector-c-cmake)
-- Adding contrib module googletest (configuring with googletest-cmake)
-- Adding contrib module llvm (configuring with llvm-cmake)
-- LLVM include Directory: /home/hanzhou/repos/ClickHouse/contrib/llvm/llvm/include;/home/hanzhou/repos/ClickHouse/build/contrib/llvm/llvm/include
-- LLVM library Directory: /home/hanzhou/repos/ClickHouse/build/contrib/llvm/llvm
-- LLVM C++ compiler flags:
-- Native target architecture is X86
-- Threads enabled.
-- Doxygen disabled.
-- Go bindings disabled.
-- Ninja version: 1.10.0
-- Could NOT find OCaml (missing: OCAMLFIND OCAML_VERSION OCAML_STDLIB_PATH)
-- OCaml bindings disabled.
-- LLVM host triple: x86_64-unknown-linux-gnu
-- LLVM default target triple: x86_64-unknown-linux-gnu
-- Setting native build dir to /home/hanzhou/repos/ClickHouse/build/contrib/llvm/llvm/NATIVE
-- LLVMHello ignored -- Loadable modules not supported on this platform.
-- Targeting X86
-- Targeting AArch64
-- Adding contrib module libxml2 (configuring with libxml2-cmake)
-- Adding contrib module aws;aws-c-common;aws-c-event-stream;aws-checksums (configuring with aws-s3-cmake)
-- Adding contrib module base64 (configuring with base64-cmake)
-- Adding contrib module simdjson (configuring with simdjson-cmake)
-- Adding contrib module rapidjson (configuring with rapidjson-cmake)
-- Adding contrib module fastops (configuring with fastops-cmake)
-- Adding contrib module libuv (configuring with libuv-cmake)
-- Adding contrib module AMQP-CPP (configuring with amqpcpp-cmake)
-- Adding contrib module cassandra (configuring with cassandra-cmake)
-- Adding contrib module curl (configuring with curl-cmake)
-- Adding contrib module azure (configuring with azure-cmake)
By default, if no option is selected, on POSIX, libcurl transport adapter is used.
-- Adding contrib module sentry-native (configuring with sentry-native-cmake)
-- Adding contrib module fmtlib (configuring with fmtlib-cmake)
-- Adding contrib module krb5 (configuring with krb5-cmake)
-- Adding contrib module cyrus-sasl (configuring with cyrus-sasl-cmake)
-- Adding contrib module libgsasl (configuring with libgsasl-cmake)
-- Adding contrib module librdkafka (configuring with librdkafka-cmake)
-- librdkafka with SASL support
-- librdkafka with SSL support
-- Adding contrib module nats-io (configuring with nats-io-cmake)
-- Adding contrib module libhdfs3 (configuring with libhdfs3-cmake)
-- Enable kerberos for HDFS
-- checking compiler: CLANG
-- Checking whether strerror_r returns an int
-- Checking whether strerror_r returns an int -- yes
-- Adding contrib module hive-metastore (configuring with hive-metastore-cmake)
-- Adding contrib module cppkafka (configuring with cppkafka-cmake)
-- Adding contrib module libpqxx (configuring with libpqxx-cmake)
-- Adding contrib module libpq (configuring with libpq-cmake)
-- Adding contrib module NuRaft (configuring with nuraft-cmake)
-- Adding contrib module fast_float (configuring with fast_float-cmake)
-- Adding contrib module datasketches-cpp (configuring with datasketches-cpp-cmake)
-- Adding contrib module hashidsxx (configuring with hashidsxx-cmake)
-- Adding contrib module libstemmer_c (configuring with libstemmer-c-cmake)
-- Adding contrib module wordnet-blast (configuring with wordnet-blast-cmake)
-- Adding contrib module lemmagen-c (configuring with lemmagen-c-cmake)
-- Adding contrib module nlp-data (configuring with nlp-data-cmake)
-- Adding contrib module cld2 (configuring with cld2-cmake)
-- Adding contrib module sqlite-amalgamation (configuring with sqlite-cmake)
-- Adding contrib module s2geometry (configuring with s2geometry-cmake)
-- Adding contrib module c-ares (configuring with c-ares-cmake)
-- Adding contrib module qpl (configuring with qpl-cmake)
-- Not using QPL
-- Performing Test SUPPORTS_CXXFLAG_frame_larger_than=65536
-- Performing Test SUPPORTS_CXXFLAG_frame_larger_than=65536 - Success
-- Performing Test SUPPORTS_CFLAG_frame_larger_than=65536
-- Performing Test SUPPORTS_CFLAG_frame_larger_than=65536 - Success
-- compiler C = /usr/local/bin/clang --gcc-toolchain=/home/hanzhou/repos/ClickHouse/cmake/linux/../../contrib/sysroot/linux-x86_64 -fdiagnostics-color=always -Xclang -fuse-ctor-homing -gdwarf-aranges -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -mavx -fasynchronous-unwind-tables -ffile-prefix-map=/home/hanzhou/repos/ClickHouse=. -falign-functions=32 -mbranches-within-32B-boundaries -fdiagnostics-absolute-paths -fexperimental-new-pass-manager -Wframe-larger-than=65536 -Weverything -Wpedantic -Wno-zero-length-array -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-conversion -Wno-ctad-maybe-unsupported -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-padded -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -Wno-thread-safety-negative -O2 -g -DNDEBUG -O3 -g -gdwarf-4 -fno-pie
-- compiler CXX = /usr/local/bin/clang++ --gcc-toolchain=/home/hanzhou/repos/ClickHouse/cmake/linux/../../contrib/sysroot/linux-x86_64 -std=c++20 -fdiagnostics-color=always -Xclang -fuse-ctor-homing -fsized-deallocation -gdwarf-aranges -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -mavx -fasynchronous-unwind-tables -ffile-prefix-map=/home/hanzhou/repos/ClickHouse=. -falign-functions=32 -mbranches-within-32B-boundaries -fdiagnostics-absolute-paths -fstrict-vtable-pointers -fexperimental-new-pass-manager -Wall -Wextra -Wframe-larger-than=65536 -Weverything -Wpedantic -Wno-zero-length-array -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-conversion -Wno-ctad-maybe-unsupported -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-padded -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -Wno-thread-safety-negative -O2 -g -DNDEBUG -O3 -g -gdwarf-4 -fno-pie
-- LINKER_FLAGS = --gcc-toolchain=/home/hanzhou/repos/ClickHouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --ld-path=/usr/bin/ld.lld -rdynamic -Wl,--gdb-index -Wl,--build-id=sha1 -no-pie -Wl,-no-pie
-- /home/hanzhou/repos/ClickHouse/src: Have 27316 megabytes of memory.
Limiting concurrent linkers jobs to 7 and compiler jobs to 10 (system has 32 logical cores)
-- Will build ClickHouse 22.8.1.1 revision 54465
-- ClickHouse modes:
-- Server mode: ON
-- Client mode: ON
-- Local mode: ON
-- Self-extracting executable: ON
-- Benchmark mode: ON
-- Extract from config mode: ON
-- Compressor mode: ON
-- Copier mode: ON
-- Format mode: ON
-- Obfuscator mode: ON
-- ODBC bridge mode: ON
-- Library bridge mode: ON
-- ClickHouse install: ON
-- ClickHouse git-import: ON
-- ClickHouse keeper mode: ON
-- ClickHouse keeper-converter mode: ON
-- Clickhouse disks mode: ON
-- ClickHouse su: ON
-- bash_completion will be written to /usr/local/share/bash-completion/completions
-- Target check already exists
-- /home/hanzhou/repos/ClickHouse/utils: Have 27308 megabytes of memory.
Limiting concurrent linkers jobs to 7 and compiler jobs to OFF (system has 32 logical cores)
-- Configuring done
-- Generating done
```
| https://github.com/ClickHouse/ClickHouse/issues/39949 | https://github.com/ClickHouse/ClickHouse/pull/40181 | 4cc2ef5a34314061005734446d064480a537c773 | a716d78264a136211311c92a2f573b4ca23af742 | "2022-08-07T06:45:37Z" | c++ | "2022-08-13T04:06:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,944 | ["src/Processors/Formats/Impl/ParquetBlockInputFormat.cpp"] | Add support for extended (chunked) arrays for Parquet format in ClickHouse please | ### Problem
The S3 function fails with an exception when reading from a Parquet file with large map data.
### Version
22.3.6.5
### Reproduction
Here is the SELECT that triggers the problem.
```
select id, fields_map
from s3(
'my_parquet_file.parquet',
'Parquet',
'id Int64, fields_map Map(String, String)')
;
```
Here is the schema and data size that triggers the problem. (Collected with `parquet-tools`.)
```
optional group fields_map (MAP) = 217 {
repeated group key_value {
required binary key (STRING) = 218;
optional binary value (STRING) = 219;
}
}
fields_map.key_value.value-> Size In Bytes: 13243589 Size In Ratio: 0.20541047
fields_map.key_value.key-> Size In Bytes: 3008860 Size In Ratio: 0.046667963
```
And here is the resulting error.
```
Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: While executing ParquetBlockInputFormat: While executing S3. (CANNOT_READ_ALL_DATA)
```
The full stack trace is as follows:
```
chi-ch-39-ch-39-0-0-0 ch 2022.08.06 18:24:41.451777 [ 2347 ] {a7552683-f8dc-4ad4-a838-4555dc944e28} <Error> TCPHandler: Code: 33. DB::Exception: Error while reading Parquet data: NotImplemented: Nested data conversions not implemented for chunked array outputs: While executing ParquetBlockInputFormat: While executing S3. (CANNOT_READ_ALL_DATA), Stack trace (when copying this message, always include the lines below):
chi-ch-39-ch-39-0-0-0 ch
chi-ch-39-ch-39-0-0-0 ch 0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb37173a in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 1. DB::ParquetBlockInputFormat::generate() @ 0x169f6702 in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 2. DB::ISource::tryGenerate() @ 0x168fc395 in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 3. DB::ISource::work() @ 0x168fbf5a in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 4. DB::ExecutionThreadContext::executeTask() @ 0x1691c6e3 in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 5. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x1691013e in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 6. DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) @ 0x1690f960 in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 7. DB::PullingPipelineExecutor::pull(DB::Chunk&) @ 0x1692120e in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 8. DB::StorageS3Source::generate() @ 0x160f5a6c in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 9. DB::ISource::tryGenerate() @ 0x168fc395 in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 10. DB::ISource::work() @ 0x168fbf5a in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 11. DB::SourceWithProgress::work() @ 0x16b53862 in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 12. DB::ExecutionThreadContext::executeTask() @ 0x1691c6e3 in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 13. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x1691013e in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 14. ? @ 0x16911aa4 in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 15. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb418b97 in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 16. ? @ 0xb41c71d in /usr/bin/clickhouse
chi-ch-39-ch-39-0-0-0 ch 17. ? @ 0x7f7e70c2e609 in ?
chi-ch-39-ch-39-0-0-0 ch 18. __clone @ 0x7f7e70b53163 in ?
```
### Workaround?
There does not appear to be a workaround for this problem.
### Additional Information
A similar-looking problem was fixed in Arrow libraries in 2019: https://issues.apache.org/jira/browse/ARROW-4688. This increased the limit from 16MB to 2GB. Other usages of these parquet files outside ClickHouse don't encounter read issues, so maybe the ClickHouse library is out of date? | https://github.com/ClickHouse/ClickHouse/issues/39944 | https://github.com/ClickHouse/ClickHouse/pull/40485 | 2279012ddf0be4748aeb773cd0e4f4ba7ed4a380 | f53aa86a2035cade10abbb78a695d13831b309f0 | "2022-08-06T18:49:04Z" | c++ | "2022-09-01T17:40:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,915 | ["src/Storages/MergeTree/DataPartsExchange.cpp", "tests/integration/test_fetch_memory_usage/__init__.py", "tests/integration/test_fetch_memory_usage/configs/config.xml", "tests/integration/test_fetch_memory_usage/test.py"] | Do replicated fetches use up RAM and how to prevent it? | We have a big replicated table (~15TB/server compressed) with two replicas. When I tried to add two more replicas on new servers, they started to restart because of OOM:

I tried to set the `max_replicated_fetches_network_bandwidth` setting on the new replicas but it didn't help; the RAM usage continues to grow, although more slowly, until it eventually blows up again:

I was looking into what's causing it, and it appears ClickHouse is fetching the parts into memory first, as the numbers from the query below correlate with the RAM usage reported by the system:
```
SELECT
database,
table,
formatReadableSize(sum(total_size_bytes_compressed)) AS total,
formatReadableSize(sum(bytes_read_compressed)) AS read
FROM system.replicated_fetches
GROUP BY
database,
table
Query id: f7d90b30-04a8-441f-a809-ed8c9f8fdf69
ββdatabaseβββββββββββββ¬βtableβββββββββββββ¬βtotalβββββ¬βreadββββββ
β xxxxxxxxxx_20210401 β xxxxxxxxxxxxxxxx β 2.29 TiB β 1.17 TiB β
βββββββββββββββββββββββ΄βββββββββββββββββββ΄βββββββββββ΄βββββββββββ
1 row in set. Elapsed: 0.004 sec.
```
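For completeness, the overall memory tracked by the server and the number of in-flight fetches can be cross-checked from `system.metrics` — a minimal sketch, assuming the standard metric names `MemoryTracking` and `ReplicatedFetch`:
```sql
-- Overall memory tracked by the server; should correlate with the RSS growth above.
SELECT formatReadableSize(value) AS tracked_memory
FROM system.metrics
WHERE metric = 'MemoryTracking';

-- Number of part fetches currently in flight.
SELECT value AS fetches_in_flight
FROM system.metrics
WHERE metric = 'ReplicatedFetch';
```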
I can't see any setting that would limit the RAM usage by replicated fetches, and it's also quite unexpected that RAM is used instead of writing the replicated parts straight to the disk. Am I missing something? | https://github.com/ClickHouse/ClickHouse/issues/39915 | https://github.com/ClickHouse/ClickHouse/pull/39990 | 99b9e85a8fb4fce1f6bea579a6588ca9e51b280b | fdb1c2545f6acda414510e510bb537d719f162d0 | "2022-08-05T05:08:01Z" | c++ | "2022-08-09T19:10:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,907 | ["src/Storages/StorageFile.cpp", "tests/queries/0_stateless/02377_fix_file_virtual_column.reference", "tests/queries/0_stateless/02377_fix_file_virtual_column.sql"] | Logical error: Invalid number of columns in chunk pushed to OutputPort (when selecting only virtual columns from File) | ```
SELECT
_file,
_path
FROM file('exists.csv', 'CSVWithNames')
Query id: 8bff0d32-cad0-4832-907a-be39f13ac49c
0 rows in set. Elapsed: 0.099 sec.
Received exception from server (version 22.7.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid number of columns in chunk pushed to OutputPort. Expected 2, found 4
Header: _path LowCardinality(String) ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, String(size = 1))), _file LowCardinality(String) ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, String(size = 1)))
Chunk: ColumnLowCardinality(size = 7385, UInt8(size = 7385), ColumnUnique(size = 1, String(size = 1))) ColumnLowCardinality(size = 7385, UInt8(size = 7385), ColumnUnique(size = 1, String(size = 1))) ColumnLowCardinality(size = 7385, UInt8(size = 7385), ColumnUnique(size = 2, String(size = 2))) ColumnLowCardinality(size = 7385, UInt8(size = 7385), ColumnUnique(size = 2, String(size = 2)))
. (LOGICAL_ERROR)
```
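A possible workaround sketch — this is an assumption on my side, not verified: selecting at least one physical column alongside the virtual ones might keep the header and chunk shapes consistent:
```sql
-- Hypothetical workaround: read a physical column next to the virtual ones.
SELECT _file, _path, *
FROM file('exists.csv', 'CSVWithNames');
```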
Version 22.7.1.2484
| https://github.com/ClickHouse/ClickHouse/issues/39907 | https://github.com/ClickHouse/ClickHouse/pull/39943 | a120452c3d41e17822d3bf198f107077602618f2 | a1ecbefcdbf397b8945ab1546a1338f360df912c | "2022-08-04T22:12:56Z" | c++ | "2022-08-09T08:57:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,903 | ["src/Client/Connection.cpp", "src/Client/Connection.h", "src/Core/ProtocolDefines.h", "src/Server/TCPHandler.cpp", "src/Server/TCPHandler.h", "tests/integration/test_distributed_inter_server_secret/configs/remote_servers.xml", "tests/integration/test_distributed_inter_server_secret/configs/remote_servers_backward.xml", "tests/integration/test_distributed_inter_server_secret/test.py"] | Salt in "interserver mode" is useless and does not protect from replay attack | "Interserver mode" was introduced in #13156. The idea is great; however, the implementation is questionable. In particular, `salt` is generated on the client and sent to the server (see `receiveClusterNameAndSalt()`). It seems it was supposed to protect from replay attacks, but since the salt is controlled by the client, we cannot rely on it, which makes the salt completely useless. Maybe we should re-implement it in a proper way (the server should ask the client to calculate the hash with a specific salt generated by the server), switch to "interserver mode v2", and remove the current implementation.
cc: @azat, @vitlibar | https://github.com/ClickHouse/ClickHouse/issues/39903 | https://github.com/ClickHouse/ClickHouse/pull/47213 | 4337a3161a67c27543039849e46be59325149c3a | 6a653060ff20b2e7659931ca91c1cc02a7df1030 | "2022-08-04T15:27:38Z" | c++ | "2023-03-15T13:26:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,896 | ["src/Storages/MergeTree/MutateTask.cpp"] | Update fails on CH22.3-lts after materialize index for wide part, but works on master. | **Describe the unexpected behaviour**
An update fails after MATERIALIZE INDEX is called for a wide part on CH 22.3.9, but works fine on master (master just creates a hard link for the skipping index and does not recalculate it on the updated column, which is not correct either).
Should we fix this issue on 22.3-lts?
**How to reproduce**
drop table if exists t_light;
create table t_light(a int, b int, c int, index i_c(b) type minmax granularity 4) engine = MergeTree order by a settings min_bytes_for_wide_part=0;
INSERT INTO t_light SELECT number, number, number FROM numbers(10);
set mutations_sync=1;
alter table t_light MATERIALIZE INDEX i_c; <<< materialize index i_c on column b.
alter table t_light update b=-1 where a<3; <<< index i_c should be recalculated.
**Which ClickHouse server version to use**
CH 22.3.9
**Expected behavior**
The alter table ... update runs successfully.
**Error message and/or stacktrace**
Received exception from server (version 22.3.9):
Code: 341. DB::Exception: Received from localhost:9000. DB::Exception: Exception happened during execution of mutation 'mutation_3.txt' with part 'all_1_1_0_2' reason: 'Code: 76. DB::ErrnoException: Cannot open file /var/lib/clickhouse/store/4c1/4c15a019-28b1-4776-8e92-401f54b2bcad/tmp_mut_all_1_1_0_3/skp_idx_i_c.idx2, errno: 13, strerror: Permission denied. (CANNOT_OPEN_FILE) (version 22.3.9.19 (official build))'. This error maybe retryable or not. In case of unretryable error, mutation can be killed with KILL MUTATION query. (UNFINISHED)
== stacktrace ===
2022.08.04 18:37:21.657377 [ 121652 ] {4c15a019-28b1-4776-8e92-401f54b2bcad::all_1_1_0_3} <Error> virtual bool DB::MutatePlainMergeTreeTask::executeStep(): Code: 76. DB::ErrnoException: Cannot open file /var/lib/clickhouse/store/4c1/4c15a019-28b1-4776-8e92-401f54b2bcad/tmp_mut_all_1_1_0_3/skp_idx_i_c.idx2, errno: 13, strerror: Permission denied. (CANNOT_OPEN_FILE), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb3744fa in /usr/bin/clickhouse
1. DB::throwFromErrnoWithPath(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int) @ 0xb3758ea in /usr/bin/clickhouse
2. DB::WriteBufferFromFile::WriteBufferFromFile(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, int, unsigned int, char*, unsigned long) @ 0xb4abc1a in /usr/bin/clickhouse
3. DB::DiskLocal::writeFile(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, DB::WriteMode) @ 0x1534abf7 in /usr/bin/clickhouse
4. DB::MergeTreeDataPartWriterOnDisk::Stream::Stream(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::IDisk>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::ICompressionCodec> const&, unsigned long) @ 0x16392a23 in /usr/bin/clickhouse
5. DB::MergeTreeDataPartWriterOnDisk::initSkipIndices() @ 0x16393d93 in /usr/bin/clickhouse
6. DB::MergeTreeDataPartWriterOnDisk::MergeTreeDataPartWriterOnDisk(std::__1::shared_ptr<DB::IMergeTreeDataPart const> const&, DB::NamesAndTypesList const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeIndex const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeIndex const> > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::ICompressionCodec> const&, DB::MergeTreeWriterSettings const&, DB::MergeTreeIndexGranularity const&) @ 0x1639346a in /usr/bin/clickhouse
7. DB::MergeTreeDataPartWriterWide::MergeTreeDataPartWriterWide(std::__1::shared_ptr<DB::IMergeTreeDataPart const> const&, DB::NamesAndTypesList const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeIndex const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeIndex const> > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::ICompressionCodec> const&, DB::MergeTreeWriterSettings const&, DB::MergeTreeIndexGranularity const&) @ 0x163961af in /usr/bin/clickhouse
8. DB::MergeTreeDataPartWide::getWriter(DB::NamesAndTypesList const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeIndex const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeIndex const> > > const&, std::__1::shared_ptr<DB::ICompressionCodec> const&, DB::MergeTreeWriterSettings const&, DB::MergeTreeIndexGranularity const&) const @ 0x16386a82 in /usr/bin/clickhouse
9. DB::MergedColumnOnlyOutputStream::MergedColumnOnlyOutputStream(std::__1::shared_ptr<DB::IMergeTreeDataPart const> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::Block const&, std::__1::shared_ptr<DB::ICompressionCodec>, std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeIndex const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeIndex const> > > const&, std::__1::set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >*, DB::MergeTreeIndexGranularity const&, DB::MergeTreeIndexGranularityInfo const*) @ 0x164729cd in /usr/bin/clickhouse
10. void std::__1::allocator_traits<std::__1::allocator<DB::MergedColumnOnlyOutputStream> >::__construct<DB::MergedColumnOnlyOutputStream, std::__1::shared_ptr<DB::IMergeTreeDataPart>&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const>&, DB::Block&, std::__1::shared_ptr<DB::ICompressionCodec>&, std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeIndex const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeIndex const> > >, std::nullptr_t, DB::MergeTreeIndexGranularity const&, DB::MergeTreeIndexGranularityInfo const*>(std::__1::integral_constant<bool, true>, std::__1::allocator<DB::MergedColumnOnlyOutputStream>&, DB::MergedColumnOnlyOutputStream*, std::__1::shared_ptr<DB::IMergeTreeDataPart>&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const>&, DB::Block&, std::__1::shared_ptr<DB::ICompressionCodec>&, std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeIndex const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeIndex const> > >&&, std::nullptr_t&&, DB::MergeTreeIndexGranularity const&, DB::MergeTreeIndexGranularityInfo const*&&) @ 0x16491d01 in /usr/bin/clickhouse
11. DB::MutateSomePartColumnsTask::prepare() @ 0x164904b5 in /usr/bin/clickhouse
12. DB::MutateSomePartColumnsTask::executeStep() @ 0x1648f05f in /usr/bin/clickhouse
13. DB::MutatePlainMergeTreeTask::executeStep() @ 0x16478f35 in /usr/bin/clickhouse
14. DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::routine(std::__1::shared_ptr<DB::TaskRuntimeData>) @ 0xb34b7ab in /usr/bin/clickhouse
15. DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::threadFunction() @ 0xb34b3f9 in /usr/bin/clickhouse
16. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xb41beca in /usr/bin/clickhouse
17. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xb41e064 in /usr/bin/clickhouse
| https://github.com/ClickHouse/ClickHouse/issues/39896 | https://github.com/ClickHouse/ClickHouse/pull/40095 | c746dcd6449ca300f762757525987624a8dc52e9 | d1051d822c02804292ac1e9486c104247e37418f | "2022-08-04T10:44:12Z" | c++ | "2022-08-11T10:39:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,880 | ["docs/en/sql-reference/functions/string-functions.md", "docs/zh/sql-reference/functions/string-functions.md", "src/Functions/soundex.cpp", "tests/queries/0_stateless/02711_soundex_function.reference", "tests/queries/0_stateless/02711_soundex_function.sql"] | SOUNDEX function | https://mariadb.com/kb/en/soundex/
```sql
mysql> select SOUNDEX('aksel');
+------------------+
| SOUNDEX('aksel') |
+------------------+
| A240 |
+------------------+
1 row in set (0.00 sec)
mysql> select SOUNDEX('axel');
+-----------------+
| SOUNDEX('axel') |
+-----------------+
| A240 |
+-----------------+
1 row in set (0.00 sec)
```
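A hypothetical ClickHouse counterpart, assuming the function were added under the same name and semantics (no such function exists yet — this issue is the proposal):
```sql
-- Hypothetical usage of a soundex() function with MySQL-compatible semantics:
-- both inputs should encode to 'A240' per the examples above.
SELECT soundex('aksel') AS a, soundex('axel') AS b, a = b AS sounds_alike;
```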
Also interesting https://mariadb.com/kb/en/sounds-like/ | https://github.com/ClickHouse/ClickHouse/issues/39880 | https://github.com/ClickHouse/ClickHouse/pull/48567 | b6c17759f4ca34e1da749d50093a7e97052da564 | 658d6c8870119365a42fbfcc3b6b83f54ba72c70 | "2022-08-03T23:23:41Z" | c++ | "2023-04-13T09:20:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,824 | ["utils/self-extracting-executable/post_build.sh"] | "Unable to extract decompressor" error when building on macOS | **Operating system**
macOS
**Cmake version**
cmake version 3.23.3
**Ninja version**
1.11.0
**Compiler name and version**
**Full cmake and/or ninja output**
[18/18] cd /Users/jianzjzhang/Documents/ClickHouse/build/programs/self-extracting && /usr/local/Cellar/cmake/3.23.3/b...&& /Users/jianzjzhang/Documents/ClickHouse/build/utils/self-extracting-executable/compressor clickhouse ../clickhouse
FAILED: programs/self-extracting/CMakeFiles/self-extracting /Users/jianzjzhang/Documents/ClickHouse/build/programs/self-extracting/CMakeFiles/self-extracting
cd /Users/jianzjzhang/Documents/ClickHouse/build/programs/self-extracting && /usr/local/Cellar/cmake/3.23.3/bin/cmake -E remove clickhouse && /Users/jianzjzhang/Documents/ClickHouse/build/utils/self-extracting-executable/compressor clickhouse ../clickhouse
Error: unable to extract decompressor
ninja: build stopped: subcommand failed.
It seems that `set (DECOMPRESSOR "--decompressor=${CMAKE_BINARY_DIR}/utils/self-extracting-executable/decompressor")` should also be set in the `else` branch of `ClickHouse/programs/self-extracting/CMakeLists.txt`.
| https://github.com/ClickHouse/ClickHouse/issues/39824 | https://github.com/ClickHouse/ClickHouse/pull/39843 | 70d97e9393885b8949115827438fde29d5f8a733 | e2a5faede91980b07dc8ff193f008f17d5ba634f | "2022-08-02T13:38:11Z" | c++ | "2022-08-03T02:55:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,818 | ["src/Common/ColumnsHashing.h", "src/Processors/Transforms/IntersectOrExceptTransform.cpp", "tests/queries/0_stateless/02381_intersect_except_const_column.reference", "tests/queries/0_stateless/02381_intersect_except_const_column.sql"] | Wrong result of intersect | ```sql
select 1 from numbers(10) intersect select 1 from numbers(10)
SELECT 1
FROM numbers(10)
INTERSECT
SELECT 1
FROM numbers(10)
Query id: 9f2d11c5-269c-41dc-9b4c-8f1423aafcc7
ββ1ββ
β 1 β
β 1 β
β 1 β
β 1 β
β 1 β
β 1 β
β 1 β
β 1 β
βββββ
8 rows in set. Elapsed: 0.002 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/39818 | https://github.com/ClickHouse/ClickHouse/pull/40020 | 17956cb668e2af2f9f19b5618820e9e5c843badd | 4c7222d938e6ebc4b564be9a838c8ce7b5253fd8 | "2022-08-02T09:50:11Z" | c++ | "2022-08-12T12:40:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,807 | ["src/Common/CaresPTRResolver.cpp", "src/Common/DNSPTRResolverProvider.cpp"] | When I upgrade to 22.7.1.2484 from 21.8.14.5, all servers crash after a few minutes. The relevant part of the log is as follows. | (screenshot of the crash log omitted)
| https://github.com/ClickHouse/ClickHouse/issues/39807 | https://github.com/ClickHouse/ClickHouse/pull/40134 | 18c3ae5b377068fbafd4c0d2185ffd776c9c059e | 4482465ce36b2c128ef86f2f6d85b53370a01aab | "2022-08-02T02:38:26Z" | c++ | "2022-08-16T18:06:17Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,793 | ["docs/en/sql-reference/functions/other-functions.md", "docs/ru/sql-reference/functions/other-functions.md", "src/Core/Settings.h", "src/Functions/FunctionDateOrDateTimeAddInterval.h", "src/Functions/FunctionNumericPredicate.h", "src/Functions/makeDate.cpp", "src/Functions/throwIf.cpp", "tests/queries/0_stateless/00602_throw_if.reference", "tests/queries/0_stateless/00602_throw_if.sh", "tests/queries/0_stateless/00995_exception_while_insert.sh"] | New function: throwError | Is there any way to manually make CH throw a specific error?
Something like this:
```sql
SELECT throwError(999);
Received exception from server (version 22.2.2):
Code: 999. DB::Exception: KEEPER_EXCEPTION ...
```
PS:
*Although there is a similar function (`throwIf`), it only returns code=395 (FUNCTION_THROW_IF_VALUE_IS_NON_ZERO).*
**Use case**
We're looking for something like this so we can simulate errors and take relevant actions against specific errors.
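For reference, the closest existing tool is the two-argument form of `throwIf`, which accepts a custom message (but the error code stays fixed) — a minimal sketch:
```sql
-- Workaround sketch: custom message via throwIf; the code is still 395
-- (FUNCTION_THROW_IF_VALUE_IS_NON_ZERO), so a specific error code cannot be simulated.
SELECT throwIf(number = 3, 'simulated failure') FROM numbers(10);
```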
| https://github.com/ClickHouse/ClickHouse/issues/39793 | https://github.com/ClickHouse/ClickHouse/pull/40319 | 68e98e43ed58d08deedfa7776b45b9b441db33a0 | 24615059fbd534bf6c25d48b914b730aa780d910 | "2022-08-01T14:15:57Z" | c++ | "2022-08-19T07:49:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,753 | ["src/Disks/IO/ThreadPoolReader.cpp"] | Linux kernels 5.9..5.10: Cannot read all data in CH 22.7 ( pread_threadpool ) | I created this issue to make the solution easier for others to find.
Solution:
```xml
cat /etc/clickhouse-server/users.d/local_filesystem_read_method.xml
<?xml version="1.0" ?>
<clickhouse>
<profiles>
<default>
<local_filesystem_read_method>pread</local_filesystem_read_method>
</default>
</profiles>
</clickhouse>
```
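Presumably the same setting can also be applied per session instead of via the users.d override — a sketch (the table name is a placeholder):
```sql
-- Session-level equivalent of the profile override above.
SET local_filesystem_read_method = 'pread';
SELECT count() FROM my_table; -- placeholder: re-run any query that previously failed
```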
---------------------------
Cannot read from file 37, errno: 1, strerror: Operation not permitted: (while reading column event_date): (while reading from part /var/lib/clickhouse/store/0c4/0c486c57-095c-4343-b0b6-326807f9fc21/202207_494331_555280_2626/ from mark 0 with max_rows_to_read = 65505): While executing MergeTreeThread. (CANNOT_READ_FROM_FILE_DESCRIPTOR) (version 22.7.1.2484 (official build))
CentOS Linux release 7.9.2009 (Core)
3.10.0-1160.71.1.el7.x86_64 #1 SMP Tue Jun 28 15:37:28 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---------------------------
Cannot read all data. Bytes read: 262. Bytes expected: 268.: (while reading column date): (while reading from part /var/lib/clickhouse-server/data/myapp/events_shard/202206_22_22_0/ from mark 48 with max_rows_to_read = 65505): While executing MergeTreeThread. (CANNOT_READ_ALL_DATA)
Debian GNU/Linux 11 (bullseye)
5.10.0-16-cloud-arm64 #1 SMP Debian 5.10.127-1 (2022-06-30) aarch64 GNU/Linux
---------------------------
Cannot read all data. Bytes read: 26613. Bytes expected: 47279.: (while reading column fare_amount): (while reading from part /var/lib/clickhouse/store/a2e/a2ee6c32-4979-415d-9ca0-727d9aa0613c/201503_33_38_1/ from mark 24 with max_rows_to_read = 65505): While executing MergeTreeThread. (CANNOT_READ_ALL_DATA)" while running the following query select min(fare_amount) from yellow_taxi for 146 M lines
Ubuntu 20.04.4 LTS
5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---------------------------
Cannot read all data. Bytes read: 46098. Bytes expected: 62601.: (while reading column imsi): (while reading from
part /data/clickhouse/store/6fe/6fe42505-d6c7-4d87-bb8d-05d49a90c2cc/all_1_160_3/ from mark 0 with max_rows_to_read = 61272): While
executing MergeTreeThread. (CANNOT_READ_ALL_DATA) (version 22.7.1.2033 (official build))
---------------------------
Cannot read all data. Bytes read: 44772. Bytes expected: 47817.: (while reading column STAYNUM): (while reading from part /var/lib/clickhouse/store/40d/40dd58ff-55eb-4877-922d-f797f1db9731/all_7_12_1/ from mark 110 with max_rows_to_read = 24576): While executing MergeTreeThread. (CANNOT_READ_ALL_DATA) (version 22.7.1.2484 (official build))
---------------------------
| https://github.com/ClickHouse/ClickHouse/issues/39753 | https://github.com/ClickHouse/ClickHouse/pull/39800 | 6405439976e8e6e5321230ec3e47dd60e846293c | 2a5b023b0f50aa610f95452cbe9fd2e9d4ace8ca | "2022-07-31T02:21:09Z" | c++ | "2022-08-02T14:06:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,719 | ["src/Client/ClientBase.cpp", "tests/queries/0_stateless/02360_send_logs_level_colors.reference", "tests/queries/0_stateless/02360_send_logs_level_colors.sh"] | `send_logs_level` with redirect to stdout may fail | **Describe what's wrong**
Query in clickhouse-client may fail with options `--send_logs_level=trace --server_logs_file=-` (`--server_logs_file=-` means write server logs to stdout).
**How to reproduce**
```
while true; do clickhouse-client -q "select 1" --send_logs_level=trace --server_logs_file=- >log; done
Code: 75. DB::ErrnoException: Cannot write to file (fd = 1), errno: 14, strerror: Bad address: (in query: select 1). (CANNOT_WRITE_TO_FILE_DESCRIPTOR)
Code: 75. DB::ErrnoException: Cannot write to file (fd = 1), errno: 14, strerror: Bad address: (in query: select 1). (CANNOT_WRITE_TO_FILE_DESCRIPTOR)
Code: 75. DB::ErrnoException: Cannot write to file (fd = 1), errno: 14, strerror: Bad address: (in query: select 1). (CANNOT_WRITE_TO_FILE_DESCRIPTOR)
```
Some queries will fail periodically.
| https://github.com/ClickHouse/ClickHouse/issues/39719 | https://github.com/ClickHouse/ClickHouse/pull/39731 | c882bdc88e75b249ecc901ac68145f1a6cf93ed9 | eeb9366010f3d336689dba3ccefc4bc6c0477b69 | "2022-07-29T16:32:41Z" | c++ | "2022-08-01T12:22:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,704 | ["src/Interpreters/Set.cpp", "tests/queries/0_stateless/02374_in_tuple_index.reference", "tests/queries/0_stateless/02374_in_tuple_index.sql"] | IN operator is not working with tuples | **Describe what's wrong**
When adding new values to the IN operator with tuples, the result set is reduced.
**Does it reproduce on recent release?**
Probably not working since v22.4.2.1-stable
Still not working on v22.7.1.2484-stable
**How to reproduce**
```
CREATE TABLE ISSUE_EXAMPLE
(
`THIRDPARTY_ID` String,
`THIRDPARTY_USER_ID` String,
`PLATFORM` LowCardinality(String),
`DATE` Date
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(DATE)
ORDER BY (PLATFORM, THIRDPARTY_USER_ID, THIRDPARTY_ID)
SETTINGS storage_policy = 'nvme', index_granularity = 2048
```
```
INSERT INTO ISSUE_EXAMPLE VALUES ('17843212264024828', 238040827, 'insta', '2014-07-30'), ('17862693304001575', 238040827, 'insta', '2016-07-09')
```
```
SELECT *
FROM ISSUE_EXAMPLE
WHERE (PLATFORM, THIRDPARTY_USER_ID) IN (('insta', '238040827'))
```
That query returns 2 rows.
```
SELECT *
FROM ISSUE_EXAMPLE
WHERE (PLATFORM, THIRDPARTY_USER_ID) IN (('insta', '238040827'), ('insta', '22606518861'))
```
That query returns 0 rows instead of 2. Adding a new value to the IN list should not reduce the result set. | https://github.com/ClickHouse/ClickHouse/issues/39704 | https://github.com/ClickHouse/ClickHouse/pull/39752 | eeb9366010f3d336689dba3ccefc4bc6c0477b69 | 8a3ec52b5e06f20ccd3472aed7fa440b625ebc0e | "2022-07-29T10:13:51Z" | c++ | "2022-08-01T12:41:46Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,701 | ["src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp", "tests/queries/0_stateless/02375_rocksdb_with_filters.reference", "tests/queries/0_stateless/02375_rocksdb_with_filters.sh"] | EmbeddedRocksDB does full scan when using `params` | **Describe the situation**
When using parameters (`param`) in the CLI or with the HTTP interface, the EmbeddedRocksDB engine scans through all the data, causing a significant slowdown, instead of doing a proper RocksDB key lookup.
**How to reproduce**
setup.sql:
```sql
CREATE TABLE test1 (key String, value String) ENGINE=EmbeddedRocksDB PRIMARY KEY key;
-- do this a few times to exacerbate the issue
INSERT INTO test1 (*) SELECT n.number, randomString(10000) FROM numbers(10000) n;
INSERT INTO test1 (*) SELECT n.number, randomString(10000) FROM numbers(10000) n;
INSERT INTO test1 (*) SELECT n.number, randomString(10000) FROM numbers(10000) n;
```
```
❯ clickhouse client "--param_key=5000" --query "SELECT value FROM test1 WHERE key = {key:String} FORMAT JSON" | jq ".statistics"
{
"elapsed": 0.20763,
"rows_read": 10000,
"bytes_read": 100218890
}
❯ clickhouse client --query "SELECT value FROM test1 WHERE key = '5000' FORMAT JSON" | jq ".statistics"
{
"elapsed": 0.000519,
"rows_read": 1,
"bytes_read": 10022
}
```
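Until this is fixed, a hedged workaround sketch is to substitute the key on the client side, so the server sees a literal (which, as shown above, performs a proper key lookup). `KEY` is a hypothetical shell variable here and must come from a trusted source or be escaped:

```bash
KEY=5000
clickhouse client --query "SELECT value FROM test1 WHERE key = '${KEY}' FORMAT JSON" | jq ".statistics"
```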
**Version**
22.6.1.231
| https://github.com/ClickHouse/ClickHouse/issues/39701 | https://github.com/ClickHouse/ClickHouse/pull/39757 | 379d8c5c6a2732c4c9e6e87ae347e0e4690975a8 | 9ec27c0ab45c78699f07c7175845358d604d713e | "2022-07-29T09:32:19Z" | c++ | "2022-08-01T10:17:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,693 | ["src/Formats/JSONUtils.cpp", "src/Processors/Formats/Impl/JSONRowOutputFormat.h", "tests/queries/0_stateless/02375_double_escaping_json.reference", "tests/queries/0_stateless/02375_double_escaping_json.sql"] | Inconsistent behavior of back-slash escaping in json format for meta and data | Reproducible on 22.7.1 and 22.6.1
**How to reproduce**
```
SELECT 1 AS `\\"ph"`
FORMAT JSON

Query id: 0159c301-b91e-4b59-b8e1-840056d18ffb
```
```
{
"meta":
[
{
"name": "\\\\\\\"ph\\\"",
"type": "UInt8"
}
],
"data":
[
{
"\\\"ph\"": 1
}
],
"rows": 1,
"statistics":
{
"elapsed": 0.000840535,
"rows_read": 1,
"bytes_read": 1
}
}
```
Expected result: column name in meta and data should be the same.
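For clarity, a sketch of what the meta entry should look like; it should match the data key byte for byte:

```
"name": "\\\"ph\"",
```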
Actual result: They are not.
It used to work on 22.3.15 - Even though it has another issue back then. | https://github.com/ClickHouse/ClickHouse/issues/39693 | https://github.com/ClickHouse/ClickHouse/pull/39747 | c9e685030625f749c7564fc861d8aa1aab8f5e60 | d259c4fa6c4aedc93f5021e4cf8091a458da98c6 | "2022-07-28T23:02:17Z" | c++ | "2022-07-31T09:28:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,678 | ["src/Processors/Formats/Impl/ArrowColumnToCHColumn.cpp", "tests/queries/0_stateless/02381_arrow_dict_of_nullable_string_to_lc.reference", "tests/queries/0_stateless/02381_arrow_dict_of_nullable_string_to_lc.sh", "tests/queries/0_stateless/02381_arrow_dict_to_lc.reference", "tests/queries/0_stateless/02381_arrow_dict_to_lc.sh"] | Dictionary in Arrow format issue | ```py
## writing ArrowStream file from python
import pyarrow as pa
data = [
pa.array([1, 2, 3, 4, 5]),
pa.array(["one", "two", "three", "four", "five"]).dictionary_encode(),
pa.array([1, 2, 3, 4, 5]).dictionary_encode(),
pa.array([True, False, True, True, True])
]
batch = pa.record_batch(data, names=['id', 'lc_nullable', 'lc_int_nullable', 'bool_nullable'])
writer = pa.ipc.new_stream("test4.arrows", batch.schema)
writer.write_batch(batch)
writer.close()
```
```
clickhouse-local --query='SELECT * FROM table FORMAT TSVWithNamesAndTypes' --stacktrace --input-format=ArrowStream < test4.arrows
id lc_nullable lc_int_nullable bool_nullable
Nullable(Int64) LowCardinality(Nullable(String)) LowCardinality(Nullable(Int64)) Nullable(UInt8)
1 0 1
2 one 1 0
3 two 2 1
4 three 3 1
5 four 4 1
```
Index 0 in a LowCardinality dictionary has a special meaning in ClickHouse (null/default), while in Arrow it is a regular value.
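Until the server handles this, a hedged client-side workaround sketch is to skip dictionary encoding on the producer side, so ClickHouse reads plain Nullable columns instead of LowCardinality ones:

```py
# Same batch as above, but without .dictionary_encode(), avoiding the index-0 shift:
data = [
    pa.array([1, 2, 3, 4, 5]),
    pa.array(["one", "two", "three", "four", "five"]),
    pa.array([1, 2, 3, 4, 5]),
    pa.array([True, False, True, True, True])
]
```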
Probably index remapping is needed on the server side... :| | https://github.com/ClickHouse/ClickHouse/issues/39678 | https://github.com/ClickHouse/ClickHouse/pull/40037 | 1196d1d11785491db2d7ec701ad55259f4a0fc4f | a006ff0a43ed9302253819660362c1c9a853f9d9 | "2022-07-28T14:28:58Z" | c++ | "2022-08-10T09:40:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,635 | ["src/Processors/Formats/Impl/ArrowColumnToCHColumn.cpp", "tests/queries/0_stateless/02381_arrow_dict_of_nullable_string_to_lc.reference", "tests/queries/0_stateless/02381_arrow_dict_of_nullable_string_to_lc.sh", "tests/queries/0_stateless/02381_arrow_dict_to_lc.reference", "tests/queries/0_stateless/02381_arrow_dict_to_lc.sh"] | Nested type LowCardinality(String) cannot be inside Nullable type: While executing ArrowBlockInputFormat | ```
clickhouse-local -mn --query='SELECT * FROM table' --stacktrace --input-format=ArrowStream < test.arrow
```
```
Code: 43. DB::Exception: Nested type LowCardinality(String) cannot be inside Nullable type: While executing ArrowBlockInputFormat: While executing File. (ILLEGAL_TYPE_OF_ARGUMENT), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xba2879a in /usr/bin/clickhouse
1. DB::DataTypeNullable::DataTypeNullable(std::__1::shared_ptr<DB::IDataType const> const&) @ 0x160bb75f in /usr/bin/clickhouse
2. std::__1::shared_ptr<DB::DataTypeNullable> std::__1::allocate_shared<DB::DataTypeNullable, std::__1::allocator<DB::DataTypeNullable>, std::__1::shared_ptr<DB::IDataType const>, void>(std::__1::allocator<DB::DataTypeNullable> const&, std::__1::shared_ptr<DB::IDataType const>&&) @ 0x160f56e8 in /usr/bin/clickhouse
3. DB::readColumnFromArrowColumn(std::__1::shared_ptr<arrow::ChunkedArray>&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::shared_ptr<DB::ColumnWithTypeAndName>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, std::__1::shared_ptr<DB::ColumnWithTypeAndName> > > >&, bool, bool, bool, bool&) @ 0x17b9ccbc in /usr/bin/clickhouse
4. DB::ArrowColumnToCHColumn::arrowColumnsToCHChunk(DB::Chunk&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::shared_ptr<arrow::ChunkedArray>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, std::__1::shared_ptr<arrow::ChunkedArray> > > >&) @ 0x17ba15a3 in /usr/bin/clickhouse
5. DB::ArrowColumnToCHColumn::arrowTableToCHChunk(DB::Chunk&, std::__1::shared_ptr<arrow::Table>&) @ 0x17ba0ffc in /usr/bin/clickhouse
6. DB::ArrowBlockInputFormat::generate() @ 0x17b8e270 in /usr/bin/clickhouse
7. DB::ISource::tryGenerate() @ 0x17b60b95 in /usr/bin/clickhouse
8. DB::ISource::work() @ 0x17b606e6 in /usr/bin/clickhouse
9. DB::ExecutionThreadContext::executeTask() @ 0x17b7cf0a in /usr/bin/clickhouse
10. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x17b71df0 in /usr/bin/clickhouse
11. DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) @ 0x17b715c0 in /usr/bin/clickhouse
12. DB::PullingPipelineExecutor::pull(DB::Chunk&) @ 0x17b81b43 in /usr/bin/clickhouse
13. DB::StorageFileSource::generate() @ 0x171394d7 in /usr/bin/clickhouse
14. DB::ISource::tryGenerate() @ 0x17b60b95 in /usr/bin/clickhouse
15. DB::ISource::work() @ 0x17b606e6 in /usr/bin/clickhouse
16. DB::ExecutionThreadContext::executeTask() @ 0x17b7cf0a in /usr/bin/clickhouse
17. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x17b71df0 in /usr/bin/clickhouse
18. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x17b70e64 in /usr/bin/clickhouse
19. DB::PipelineExecutor::execute(unsigned long) @ 0x17b70bfd in /usr/bin/clickhouse
20. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x17b80e13 in /usr/bin/clickhouse
21. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xbaf5268 in /usr/bin/clickhouse
22. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0xbaf85fd in /usr/bin/clickhouse
23. ? @ 0x7fdefda5d609 in ?
24. clone @ 0x7fdefd982133 in ?
```
[test.zip](https://github.com/ClickHouse/ClickHouse/files/9196695/test.zip) | https://github.com/ClickHouse/ClickHouse/issues/39635 | https://github.com/ClickHouse/ClickHouse/pull/40037 | 1196d1d11785491db2d7ec701ad55259f4a0fc4f | a006ff0a43ed9302253819660362c1c9a853f9d9 | "2022-07-27T08:55:41Z" | c++ | "2022-08-10T09:40:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,598 | ["src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/02388_conversion_from_string_with_datetime64_to_date_and_date32.reference", "tests/queries/0_stateless/02388_conversion_from_string_with_datetime64_to_date_and_date32.sql"] | toDate() function should ignore the time part | Hello!
I've started to use `DateTime64` recently and ran into the following issue with the `toDate()`/`toDate32()` functions.
This query works fine:
```sql
SELECT toDate('2022-07-26 00:00:00')
Query id: bf64d8a8-018d-4b3e-ae24-f92600a1fa15
┌─toDate('2022-07-26 00:00:00')─┐
│                    2022-07-26 │
└───────────────────────────────┘
1 rows in set. Elapsed: 0.003 sec.
```
But when the string representation of the timestamp has a milliseconds fraction, the function does not work:
```sql
SELECT toDate('2022-07-26 00:00:00.000')
Query id: 8b42473a-0ceb-4892-b0e7-6f73f583d4b4
0 rows in set. Elapsed: 0.006 sec.
Received exception from server (version 22.3.8):
Code: 6. DB::Exception: Received from s:9000. DB::Exception: Cannot parse string '2022-07-26 00:00:00.000' as Date: syntax error at position 10 (parsed just '2022-07-26'): While processing toDate('2022-07-26 00:00:00.000'). (CANNOT_PARSE_TEXT)
```
The `toDate32()` function doesn't work in either case.
Yes, you can say I must change the input or use a workaround like this:
```sql
SELECT toDate(toDateTime64('2022-07-26 00:00:00.000', 3, 'UTC')) AS date
```
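Another hedged workaround sketch that avoids the explicit timezone: truncate the string to its date part before converting (this assumes a fixed `YYYY-MM-DD` prefix):

```sql
SELECT toDate(substring('2022-07-26 00:00:00.000', 1, 10)) AS date
```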
But neither is convenient, and it'd be great if the mentioned functions would simply ignore the time part.
P.s. ClickHouse server version: `22.3.8.39`
| https://github.com/ClickHouse/ClickHouse/issues/39598 | https://github.com/ClickHouse/ClickHouse/pull/40475 | f68db41c1c330a5066eca9337292cbe713f4bb0b | c9177d2cb3717318b9c8aede7125fc7506b77644 | "2022-07-26T08:47:17Z" | c++ | "2022-08-24T15:33:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,559 | ["contrib/NuRaft"] | Jepsen: invalid analysis | https://s3.amazonaws.com/clickhouse-test-reports/0/73c04b64e35354264a5943b131f879753f3eaaca/clickhouse_keeper_jepsen.html
cc: @antonio2368, @alesapin | https://github.com/ClickHouse/ClickHouse/issues/39559 | https://github.com/ClickHouse/ClickHouse/pull/39609 | ee515b8862a9be0c940ffdef0a56e590b5facdb2 | b1f014f9a65467472244b3c5f55a48a58949f4d8 | "2022-07-25T14:01:17Z" | c++ | "2022-07-28T11:55:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,546 | ["programs/server/Server.cpp", "src/Databases/DatabaseOrdinary.cpp", "src/Databases/DatabaseOrdinary.h", "src/Databases/IDatabase.h", "src/Interpreters/loadMetadata.cpp", "src/Interpreters/loadMetadata.h", "src/Storages/StorageLog.cpp", "src/Storages/StorageReplicatedMergeTree.cpp", "src/Storages/StorageStripeLog.cpp", "tests/integration/test_backward_compatibility/test_convert_ordinary.py"] | Mechanism to migrate databases from Ordinary to Atomic engine | The Ordinary database engine is deprecated, but there is no simple means to change the database engine to Atomic for existing databases.
Possible options:
- ALTER DATABASE MODIFY ENGINE query.
- Flag that triggers migration at ClickHouse startup.
The latter should be easier to implement.
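A sketch of how the startup-flag option could look from the operator's side (the flag name and paths here are hypothetical):

```bash
# If the flag file exists at startup, the server would convert Ordinary
# databases to Atomic and then remove the flag.
touch /var/lib/clickhouse/flags/convert_ordinary_to_atomic
sudo systemctl restart clickhouse-server
```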
| https://github.com/ClickHouse/ClickHouse/issues/39546 | https://github.com/ClickHouse/ClickHouse/pull/39933 | d23296ef85b615d9ee9f844145d09104bf9e615c | caa270b72abff4dea10739f242ad7f532774aee7 | "2022-07-25T07:32:37Z" | c++ | "2022-08-17T09:40:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,534 | ["programs/obfuscator/Obfuscator.cpp", "tests/queries/1_stateful/00096_obfuscator_save_load.reference", "tests/queries/1_stateful/00096_obfuscator_save_load.sh"] | clickhouse-obfuscator: allow to save and load models. | **Use case**
Generate models once and reuse them.
**Describe the solution you'd like**
Add command line options:
```
--save file - write serialized models on disk after training and before generating the data; the user can set --limit to 0 to only train the models and save them without proceeding to data generation;
--load file - instead of training the models, load the prepared models from the file and start generating the data.
```
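A usage sketch of the proposed options (hypothetical until implemented; `$S` stands for the table structure argument):

```bash
# Train the models only (no data generated) and persist them:
clickhouse-obfuscator --structure "$S" --input-format TSV --output-format TSV \
    --save models.bin --limit 0 < data.tsv > /dev/null
# Later, reuse the saved models instead of retraining:
clickhouse-obfuscator --structure "$S" --input-format TSV --output-format TSV \
    --load models.bin < data.tsv > obfuscated.tsv
```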
Serialization format: one byte for the version (equal to 0), then simple binary serialization of all the models; the whole file is compressed with ZSTD level 1 using CompressedWriteBuffer (no need to offer any compression options). | https://github.com/ClickHouse/ClickHouse/issues/39534 | https://github.com/ClickHouse/ClickHouse/pull/39541 | 9254ae1f4316f9ffc5b32cf7abe91f515231f8b1 | 75d0232265a66bb889af7b9c7123a147fcc0f202 | "2022-07-24T18:33:34Z" | c++ | "2022-07-25T18:18:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,525 | ["contrib/nats-io-cmake/CMakeLists.txt", "docker/test/integration/runner/compose/docker_compose_nats.yml", "src/Storages/NATS/NATSConnection.cpp", "src/Storages/NATS/NATSConnection.h", "tests/integration/helpers/cluster.py", "tests/integration/test_storage_nats/nats_certs.sh", "tests/integration/test_storage_nats/test.py"] | NATS is built with no TLS support | I.e.
```sql
CREATE TABLE queue (
key UInt64,
value UInt64
) ENGINE = NATS
SETTINGS nats_url = 'demo.nats.io',
nats_subjects = 'subject1',
nats_format = 'JSONEachRow',
date_time_input_format = 'best_effort';
```
Creating this table will fail with a "compiled without TLS" error.
I will add a patch that fixes it. | https://github.com/ClickHouse/ClickHouse/issues/39525 | https://github.com/ClickHouse/ClickHouse/pull/39527 | cb7f072fe88a66387a0f990920db626c1774741f | 0921548a3731b84bbd0aad92a668dc16ab29973c | "2022-07-24T14:12:44Z" | c++ | "2022-08-08T05:28:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,522 | ["docs/en/sql-reference/functions/date-time-functions.md", "src/Functions/blockNumber.cpp", "src/Functions/now.cpp", "src/Functions/nowInBlock.cpp", "src/Functions/registerFunctionsDateTime.cpp", "tests/queries/0_stateless/02372_now_in_block.reference", "tests/queries/0_stateless/02372_now_in_block.sql"] | `nowInBlock` function | **Use case**
Workload generation with continuous queries.
**Describe the solution you'd like**
Similar to now() but non-constant.
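For illustration, a sketch of the difference, assuming the proposed name `nowInBlock`:

```sql
-- now() is computed once per query; nowInBlock() would be computed per block,
-- so its value advances while a long-running query streams blocks:
SELECT now(), nowInBlock(), sleep(1) FROM numbers(3) SETTINGS max_block_size = 1;
```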
| https://github.com/ClickHouse/ClickHouse/issues/39522 | https://github.com/ClickHouse/ClickHouse/pull/39533 | 21d88f346fe7f4205576747123c4ac9d3302baca | 6fdcb009ffe3592f15148f25bc504a906ccfea02 | "2022-07-24T01:20:49Z" | c++ | "2022-07-25T01:22:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,521 | ["programs/obfuscator/Obfuscator.cpp", "tests/queries/1_stateful/00175_obfuscator_schema_inference.reference", "tests/queries/1_stateful/00175_obfuscator_schema_inference.sh"] | `clickhouse-obfuscator`: add schema inference | Don't require the `--structure` argument. | https://github.com/ClickHouse/ClickHouse/issues/39521 | https://github.com/ClickHouse/ClickHouse/pull/40120 | 4c30dbc9059061184059c03d179f2e0a85d88cac | 95847775b697df969859c4c02f2df72f4fc99d00 | "2022-07-24T01:13:43Z" | c++ | "2022-09-05T20:03:57Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,511 | ["src/Interpreters/DDLWorker.cpp", "tests/queries/0_stateless/02319_sql_standard_create_drop_index.reference", "tests/queries/0_stateless/02319_sql_standard_create_drop_index.sql"] | CREATE INDEX on ReplicatedMergeTree table in a Replicated database causes infinite failure loop | Hey,
I'd like to report a bug using the new CREATE INDEX syntax.
**Description**
- Using ClickHouse version 22.7.1.2484, in a cluster with 2 shards and 2 replicas for each shard (total of 4 nodes)
- When using the new CREATE INDEX statement added in this version, on a ReplicatedMergeTree table that exists inside a database using the Replicated engine, the index is created but an infinite loop of exceptions begins in the log.
- The only workaround I found is to manually edit the log_ptr in ZooKeeper to point to the next log record, so it won't try to execute the problematic statement again.
- The index is actually created successfully.
- This problem doesn't happen when using ALTER TABLE ... ADD INDEX statement.
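For reference, a sketch of the working ALTER form of the same index (using the table from the steps below):

```sql
ALTER TABLE tests.local_testTable
    ADD INDEX IX_testsBlah_Blah2_MinMax Blah2 TYPE minmax GRANULARITY 5;
```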
**How to reproduce?**
1. `CREATE DATABASE tests ENGINE = Replicated('/dbs/test', '{shard}', '{replica}')`
2. Execute this statement on each node in the cluster (my topology is 2 shards each with 2 replicas)
3. `CREATE TABLE tests.local_testTable
(
InsertionTime Datetime32,
Blah1 String,
Blah2 Int32
)
ENGINE=ReplicatedMergeTree
PARTITION BY toYYYYMM(InsertionTime)
ORDER BY InsertionTime`
4. `CREATE INDEX IX_testsBlah_Blah2_MinMax ON tests.local_testTable(Blah2) TYPE MINMAX GRANULARITY 5`
**The log**
This is part of the log; the exception keeps happening.
```
clickhouse03 | 2022.07.22 22:12:02.642951 [ 236 ] {} <Debug> DDLWorker(tests): Will not execute task query-0000000005: Entry query-0000000005 is a dummy task
clickhouse03 | 2022.07.22 22:12:02.644341 [ 236 ] {} <Debug> DDLWorker(tests): Will not execute task query-0000000006: Entry query-0000000006 hasn't been committed
clickhouse03 | 2022.07.22 22:12:02.647018 [ 236 ] {} <Debug> DDLWorker(tests): Processing task query-0000000007 (CREATE INDEX IX_testsBlah_Blah2_MinMax ON local_testTable(Blah2) TYPE MINMAX GRANULARITY 5)
clickhouse03 | 2022.07.22 22:12:02.653851 [ 236 ] {} <Debug> DDLWorker(tests): Executing query: CREATE INDEX IX_testsBlah_Blah2_MinMax ON tests.local_testTable(Blah2) TYPE MINMAX GRANULARITY 5
clickhouse03 | 2022.07.22 22:12:02.654110 [ 236 ] {3d53abd5-e239-4a93-911f-9aac7d44855e} <Debug> executeQuery: (from 0.0.0.0:0, user: ) /* ddl_entry=query-0000000007 */ CREATE INDEX IX_testsBlah_Blah2_MinMax ON tests.local_testTable(Blah2) TYPE MINMAX GRANULARITY 5 (stage: Complete)
clickhouse03 | 2022.07.22 22:12:02.654800 [ 236 ] {3d53abd5-e239-4a93-911f-9aac7d44855e} <Error> executeQuery: Code: 44. DB::Exception: Cannot add index IX_testsBlah_Blah2_MinMax: index with this name already exists. (ILLEGAL_COLUMN) (version 22.7.1.2484 (official build)) (from 0.0.0.0:0) (in query: /* ddl_entry=query-0000000007 */ CREATE INDEX IX_testsBlah_Blah2_MinMax ON tests.local_testTable(Blah2) TYPE MINMAX GRANULARITY 5), Stack trace (when copying this message, always include the lines below):
clickhouse03 |
clickhouse03 | 0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xba37dda in /usr/bin/clickhouse
clickhouse03 | 1. DB::AlterCommand::apply(DB::StorageInMemoryMetadata&, std::__1::shared_ptr<DB::Context const>) const @ 0x17042d3b in /usr/bin/clickhouse
clickhouse03 | 2. DB::AlterCommands::apply(DB::StorageInMemoryMetadata&, std::__1::shared_ptr<DB::Context const>) const @ 0x1704cb4e in /usr/bin/clickhouse
clickhouse03 | 3. DB::MergeTreeData::checkAlterIsPossible(DB::AlterCommands const&, std::__1::shared_ptr<DB::Context const>) const @ 0x1757c7e7 in /usr/bin/clickhouse
clickhouse03 | 4. DB::InterpreterCreateIndexQuery::execute() @ 0x16b6f501 in /usr/bin/clickhouse
clickhouse03 | 5. ? @ 0x16ecdbd7 in /usr/bin/clickhouse
clickhouse03 | 6. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x16ed2033 in /usr/bin/clickhouse
clickhouse03 | 7. DB::DDLWorker::tryExecuteQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::DDLTaskBase&, std::__1::shared_ptr<zkutil::ZooKeeper> const&) @ 0x166729b4 in /usr/bin/clickhouse
clickhouse03 | 8. DB::DDLWorker::processTask(DB::DDLTaskBase&, std::__1::shared_ptr<zkutil::ZooKeeper> const&) @ 0x1667135d in /usr/bin/clickhouse
clickhouse03 | 9. DB::DDLWorker::scheduleTasks(bool) @ 0x1666f213 in /usr/bin/clickhouse
clickhouse03 | 10. DB::DDLWorker::runMainThread() @ 0x16668f7c in /usr/bin/clickhouse
clickhouse03 | 11. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::DDLWorker::*)(), DB::DDLWorker*>(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x1667d089 in /usr/bin/clickhouse
clickhouse03 | 12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xbb046a8 in /usr/bin/clickhouse
clickhouse03 | 13. ? @ 0xbb07a3d in /usr/bin/clickhouse
clickhouse03 | 14. ? @ 0x7fe4948c0609 in ?
clickhouse03 | 15. clone @ 0x7fe4947e5133 in ?
clickhouse03 |
clickhouse03 | 2022.07.22 22:12:02.655100 [ 236 ] {3d53abd5-e239-4a93-911f-9aac7d44855e} <Error> DDLWorker(tests): Query CREATE INDEX IX_testsBlah_Blah2_MinMax ON tests.local_testTable(Blah2) TYPE MINMAX GRANULARITY 5 wasn't finished successfully: Code: 44. DB::Exception: Cannot add index IX_testsBlah_Blah2_MinMax: index with this name already exists. (ILLEGAL_COLUMN), Stack trace (when copying this message, always include the lines below):
clickhouse03 |
clickhouse03 | 0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xba37dda in /usr/bin/clickhouse
clickhouse03 | 1. DB::AlterCommand::apply(DB::StorageInMemoryMetadata&, std::__1::shared_ptr<DB::Context const>) const @ 0x17042d3b in /usr/bin/clickhouse
clickhouse03 | 2. DB::AlterCommands::apply(DB::StorageInMemoryMetadata&, std::__1::shared_ptr<DB::Context const>) const @ 0x1704cb4e in /usr/bin/clickhouse
clickhouse03 | 3. DB::MergeTreeData::checkAlterIsPossible(DB::AlterCommands const&, std::__1::shared_ptr<DB::Context const>) const @ 0x1757c7e7 in /usr/bin/clickhouse
clickhouse03 | 4. DB::InterpreterCreateIndexQuery::execute() @ 0x16b6f501 in /usr/bin/clickhouse
clickhouse03 | 5. ? @ 0x16ecdbd7 in /usr/bin/clickhouse
clickhouse03 | 6. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x16ed2033 in /usr/bin/clickhouse
clickhouse03 | 7. DB::DDLWorker::tryExecuteQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::DDLTaskBase&, std::__1::shared_ptr<zkutil::ZooKeeper> const&) @ 0x166729b4 in /usr/bin/clickhouse
clickhouse03 | 8. DB::DDLWorker::processTask(DB::DDLTaskBase&, std::__1::shared_ptr<zkutil::ZooKeeper> const&) @ 0x1667135d in /usr/bin/clickhouse
clickhouse03 | 9. DB::DDLWorker::scheduleTasks(bool) @ 0x1666f213 in /usr/bin/clickhouse
clickhouse03 | 10. DB::DDLWorker::runMainThread() @ 0x16668f7c in /usr/bin/clickhouse
clickhouse03 | 11. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::DDLWorker::*)(), DB::DDLWorker*>(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x1667d089 in /usr/bin/clickhouse
clickhouse03 | 12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xbb046a8 in /usr/bin/clickhouse
clickhouse03 | 13. ? @ 0xbb07a3d in /usr/bin/clickhouse
clickhouse03 | 14. ? @ 0x7fe4948c0609 in ?
clickhouse03 | 15. clone @ 0x7fe4947e5133 in ?
clickhouse03 | (version 22.7.1.2484 (official build))
clickhouse03 | 2022.07.22 22:12:02.655153 [ 236 ] {3d53abd5-e239-4a93-911f-9aac7d44855e} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
clickhouse03 | 2022.07.22 22:12:02.659441 [ 236 ] {} <Error> DDLWorker(tests): Unexpected error, will try to restart main thread: Code: 341. DB::Exception: Unexpected error: 44
clickhouse03 | Code: 44. DB::Exception: Cannot add index IX_testsBlah_Blah2_MinMax: index with this name already exists. (ILLEGAL_COLUMN) (version 22.7.1.2484 (official build)). (UNFINISHED), Stack trace (when copying this message, always include the lines below):
clickhouse03 |
clickhouse03 | 0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xba37dda in /usr/bin/clickhouse
clickhouse03 | 1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(int, fmt::v8::basic_format_string<char, fmt::v8::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::type>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) @ 0xba83198 in /usr/bin/clickhouse
clickhouse03 | 2. DB::DDLWorker::processTask(DB::DDLTaskBase&, std::__1::shared_ptr<zkutil::ZooKeeper> const&) @ 0x166718ad in /usr/bin/clickhouse
clickhouse03 | 3. DB::DDLWorker::scheduleTasks(bool) @ 0x1666f213 in /usr/bin/clickhouse
clickhouse03 | 4. DB::DDLWorker::runMainThread() @ 0x16668f7c in /usr/bin/clickhouse
clickhouse03 | 5. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::DDLWorker::*)(), DB::DDLWorker*>(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x1667d089 in /usr/bin/clickhouse
clickhouse03 | 6. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xbb046a8 in /usr/bin/clickhouse
clickhouse03 | 7. ? @ 0xbb07a3d in /usr/bin/clickhouse
clickhouse03 | 8. ? @ 0x7fe4948c0609 in ?
clickhouse03 | 9. clone @ 0x7fe4947e5133 in ?
clickhouse03 | (version 22.7.1.2484 (official build))
clickhouse03 | 2022.07.22 22:12:02.659643 [ 236 ] {} <Information> DDLWorker(tests): Cleaned DDLWorker state
clickhouse03 | 2022.07.22 22:12:03.401779 [ 214 ] {} <Debug> DNSResolver: Updating DNS cache
clickhouse03 | 2022.07.22 22:12:03.403668 [ 214 ] {} <Debug> DNSResolver: Updated DNS cache
clickhouse03 | 2022.07.22 22:12:03.458538 [ 87 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 230.82 GiB.
clickhouse02 | 2022.07.22 22:12:03.672911 [ 93 ] {} <Debug> DNSResolver: Updating DNS cache
clickhouse02 | 2022.07.22 22:12:03.674850 [ 93 ] {} <Debug> DNSResolver: Updated DNS cache
clickhouse02 | 2022.07.22 22:12:03.710897 [ 87 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 230.82 GiB.
clickhouse02 | 2022.07.22 22:12:03.712491 [ 94 ] {} <Debug> system.query_log (396a2721-59d3-43e0-b261-e795b4974efa) (MergerMutator): Selected 6 parts from 202207_1_56_11 to 202207_61_61_0
clickhouse02 | 2022.07.22 22:12:03.712565 [ 94 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 230.82 GiB.
clickhouse02 | 2022.07.22 22:12:03.712737 [ 52 ] {396a2721-59d3-43e0-b261-e795b4974efa::202207_1_61_12} <Debug> MergeTask::PrepareStage: Merging 6 parts: from 202207_1_56_11 to 202207_61_61_0 into Compact
clickhouse02 | 2022.07.22 22:12:03.713024 [ 52 ] {396a2721-59d3-43e0-b261-e795b4974efa::202207_1_61_12} <Debug> MergeTask::PrepareStage: Selected MergeAlgorithm: Horizontal
clickhouse02 | 2022.07.22 22:12:03.713198 [ 52 ] {396a2721-59d3-43e0-b261-e795b4974efa::202207_1_61_12} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202207_1_56_11, total 90 rows starting from the beginning of the part
clickhouse02 | 2022.07.22 22:12:03.713834 [ 52 ] {396a2721-59d3-43e0-b261-e795b4974efa::202207_1_61_12} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202207_57_57_0, total 2 rows starting from the beginning of the part
clickhouse02 | 2022.07.22 22:12:03.714327 [ 52 ] {396a2721-59d3-43e0-b261-e795b4974efa::202207_1_61_12} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202207_58_58_0, total 1 rows starting from the beginning of the part
clickhouse02 | 2022.07.22 22:12:03.714875 [ 52 ] {396a2721-59d3-43e0-b261-e795b4974efa::202207_1_61_12} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202207_59_59_0, total 2 rows starting from the beginning of the part
clickhouse02 | 2022.07.22 22:12:03.715424 [ 52 ] {396a2721-59d3-43e0-b261-e795b4974efa::202207_1_61_12} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202207_60_60_0, total 1 rows starting from the beginning of the part
clickhouse02 | 2022.07.22 22:12:03.715979 [ 52 ] {396a2721-59d3-43e0-b261-e795b4974efa::202207_1_61_12} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202207_61_61_0, total 2 rows starting from the beginning of the part
clickhouse02 | 2022.07.22 22:12:03.721938 [ 52 ] {396a2721-59d3-43e0-b261-e795b4974efa::202207_1_61_12} <Debug> MergeTask::MergeProjectionsStage: Merge sorted 98 rows, containing 68 columns (68 merged, 0 gathered) in 0.0092471 sec., 10597.917184847141 rows/sec., 30.92 MiB/sec.
clickhouse02 | 2022.07.22 22:12:03.724269 [ 52 ] {} <Debug> MemoryTracker: Peak memory usage Mutate/Merge: 4.12 MiB.
clickhouse04 | 2022.07.22 22:12:04.150026 [ 138 ] {} <Debug> DNSResolver: Updating DNS cache
clickhouse01 | 2022.07.22 22:12:04.149923 [ 157 ] {} <Debug> DNSResolver: Updating DNS cache
clickhouse01 | 2022.07.22 22:12:04.151499 [ 157 ] {} <Debug> DNSResolver: Updated DNS cache
clickhouse04 | 2022.07.22 22:12:04.151653 [ 138 ] {} <Debug> DNSResolver: Updated DNS cache
clickhouse02 | 2022.07.22 22:12:07.627785 [ 235 ] {} <Debug> DDLWorker(tests): Initialized DDLWorker thread
clickhouse02 | 2022.07.22 22:12:07.627859 [ 235 ] {} <Debug> DDLWorker(tests): Scheduling tasks
clickhouse02 | 2022.07.22 22:12:07.628706 [ 235 ] {} <Debug> DDLWorker(tests): Will schedule 5 tasks starting from query-0000000003
clickhouse02 | 2022.07.22 22:12:07.635462 [ 235 ] {} <Debug> DDLWorker(tests): Will not execute task query-0000000003: Entry query-0000000003 is a dummy task
clickhouse02 | 2022.07.22 22:12:07.641258 [ 235 ] {} <Debug> DDLWorker(tests): Will not execute task query-0000000004: Entry query-0000000004 is a dummy task
clickhouse02 | 2022.07.22 22:12:07.647493 [ 235 ] {} <Debug> DDLWorker(tests): Will not execute task query-0000000005: Entry query-0000000005 is a dummy task
clickhouse02 | 2022.07.22 22:12:07.649054 [ 235 ] {} <Debug> DDLWorker(tests): Will not execute task query-0000000006: Entry query-0000000006 hasn't been committed
clickhouse02 | 2022.07.22 22:12:07.651712 [ 235 ] {} <Debug> DDLWorker(tests): Processing task query-0000000007 (CREATE INDEX IX_testsBlah_Blah2_MinMax ON local_testTable(Blah2) TYPE MINMAX GRANULARITY 5)
clickhouse02 | 2022.07.22 22:12:07.655342 [ 235 ] {} <Debug> DDLWorker(tests): Executing query: CREATE INDEX IX_testsBlah_Blah2_MinMax ON tests.local_testTable(Blah2) TYPE MINMAX GRANULARITY 5
clickhouse02 | 2022.07.22 22:12:07.655767 [ 235 ] {f347ba35-14b7-4709-b253-9bb15914d496} <Debug> executeQuery: (from 0.0.0.0:0, user: ) /* ddl_entry=query-0000000007 */ CREATE INDEX IX_testsBlah_Blah2_MinMax ON tests.local_testTable(Blah2) TYPE MINMAX GRANULARITY 5 (stage: Complete)
clickhouse02 | 2022.07.22 22:12:07.656909 [ 235 ] {f347ba35-14b7-4709-b253-9bb15914d496} <Error> executeQuery: Code: 44. DB::Exception: Cannot add index IX_testsBlah_Blah2_MinMax: index with this name already exists. (ILLEGAL_COLUMN) (version 22.7.1.2484 (official build)) (from 0.0.0.0:0) (in query: /* ddl_entry=query-0000000007 */ CREATE INDEX IX_testsBlah_Blah2_MinMax ON tests.local_testTable(Blah2) TYPE MINMAX GRANULARITY 5), Stack trace (when copying this message, always include the lines below):
clickhouse02 |
clickhouse02 | 0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xba37dda in /usr/bin/clickhouse
clickhouse02 | 1. DB::AlterCommand::apply(DB::StorageInMemoryMetadata&, std::__1::shared_ptr<DB::Context const>) const @ 0x17042d3b in /usr/bin/clickhouse
clickhouse02 | 2. DB::AlterCommands::apply(DB::StorageInMemoryMetadata&, std::__1::shared_ptr<DB::Context const>) const @ 0x1704cb4e in /usr/bin/clickhouse
clickhouse02 | 3. DB::MergeTreeData::checkAlterIsPossible(DB::AlterCommands const&, std::__1::shared_ptr<DB::Context const>) const @ 0x1757c7e7 in /usr/bin/clickhouse
clickhouse02 | 4. DB::InterpreterCreateIndexQuery::execute() @ 0x16b6f501 in /usr/bin/clickhouse
clickhouse02 | 5. ? @ 0x16ecdbd7 in /usr/bin/clickhouse
clickhouse02 | 6. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x16ed2033 in /usr/bin/clickhouse
clickhouse02 | 7. DB::DDLWorker::tryExecuteQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::DDLTaskBase&, std::__1::shared_ptr<zkutil::ZooKeeper> const&) @ 0x166729b4 in /usr/bin/clickhouse
clickhouse02 | 8. DB::DDLWorker::processTask(DB::DDLTaskBase&, std::__1::shared_ptr<zkutil::ZooKeeper> const&) @ 0x1667135d in /usr/bin/clickhouse
clickhouse02 | 9. DB::DDLWorker::scheduleTasks(bool) @ 0x1666f213 in /usr/bin/clickhouse
clickhouse02 | 10. DB::DDLWorker::runMainThread() @ 0x16668f7c in /usr/bin/clickhouse
clickhouse02 | 11. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::DDLWorker::*)(), DB::DDLWorker*>(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x1667d089 in /usr/bin/clickhouse
clickhouse02 | 12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xbb046a8 in /usr/bin/clickhouse
clickhouse02 | 13. ? @ 0xbb07a3d in /usr/bin/clickhouse
clickhouse02 | 14. ? @ 0x7f764702c609 in ?
clickhouse02 | 15. clone @ 0x7f7646f51133 in ?
clickhouse02 |
clickhouse02 | 2022.07.22 22:12:07.657163 [ 235 ] {f347ba35-14b7-4709-b253-9bb15914d496} <Error> DDLWorker(tests): Query CREATE INDEX IX_testsBlah_Blah2_MinMax ON tests.local_testTable(Blah2) TYPE MINMAX GRANULARITY 5 wasn't finished successfully: Code: 44. DB::Exception: Cannot add index IX_testsBlah_Blah2_MinMax: index with this name already exists. (ILLEGAL_COLUMN), Stack trace (when copying this message, always include the lines below):
clickhouse02 |
clickhouse02 | 0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xba37dda in /usr/bin/clickhouse
clickhouse02 | 1. DB::AlterCommand::apply(DB::StorageInMemoryMetadata&, std::__1::shared_ptr<DB::Context const>) const @ 0x17042d3b in /usr/bin/clickhouse
clickhouse02 | 2. DB::AlterCommands::apply(DB::StorageInMemoryMetadata&, std::__1::shared_ptr<DB::Context const>) const @ 0x1704cb4e in /usr/bin/clickhouse
clickhouse02 | 3. DB::MergeTreeData::checkAlterIsPossible(DB::AlterCommands const&, std::__1::shared_ptr<DB::Context const>) const @ 0x1757c7e7 in /usr/bin/clickhouse
clickhouse02 | 4. DB::InterpreterCreateIndexQuery::execute() @ 0x16b6f501 in /usr/bin/clickhouse
clickhouse02 | 5. ? @ 0x16ecdbd7 in /usr/bin/clickhouse
clickhouse02 | 6. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x16ed2033 in /usr/bin/clickhouse
clickhouse02 | 7. DB::DDLWorker::tryExecuteQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::DDLTaskBase&, std::__1::shared_ptr<zkutil::ZooKeeper> const&) @ 0x166729b4 in /usr/bin/clickhouse
clickhouse02 | 8. DB::DDLWorker::processTask(DB::DDLTaskBase&, std::__1::shared_ptr<zkutil::ZooKeeper> const&) @ 0x1667135d in /usr/bin/clickhouse
clickhouse02 | 9. DB::DDLWorker::scheduleTasks(bool) @ 0x1666f213 in /usr/bin/clickhouse
clickhouse02 | 10. DB::DDLWorker::runMainThread() @ 0x16668f7c in /usr/bin/clickhouse
clickhouse02 | 11. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::DDLWorker::*)(), DB::DDLWorker*>(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x1667d089 in /usr/bin/clickhouse
clickhouse02 | 12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xbb046a8 in /usr/bin/clickhouse
clickhouse02 | 13. ? @ 0xbb07a3d in /usr/bin/clickhouse
clickhouse03 | 2022.07.22 22:12:07.664076 [ 236 ] {} <Debug> DDLWorker(tests): Initialized DDLWorker thread
clickhouse02 | 14. ? @ 0x7f764702c609 in ?
clickhouse03 | 2022.07.22 22:12:07.664137 [ 236 ] {} <Debug> DDLWorker(tests): Scheduling tasks
clickhouse02 | 15. clone @ 0x7f7646f51133 in ?
clickhouse03 | 2022.07.22 22:12:07.664819 [ 236 ] {} <Debug> DDLWorker(tests): Will schedule 5 tasks starting from query-0000000003
clickhouse02 | (version 22.7.1.2484 (official build))
clickhouse02 | 2022.07.22 22:12:07.657214 [ 235 ] {f347ba35-14b7-4709-b253-9bb15914d496} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
clickhouse02 | 2022.07.22 22:12:07.662688 [ 235 ] {} <Error> DDLWorker(tests): Unexpected error, will try to restart main thread: Code: 341. DB::Exception: Unexpected error: 44
clickhouse02 | Code: 44. DB::Exception: Cannot add index IX_testsBlah_Blah2_MinMax: index with this name already exists. (ILLEGAL_COLUMN) (version 22.7.1.2484 (official build)). (UNFINISHED), Stack trace (when copying this message, always include the lines below):
clickhouse02 |
clickhouse02 | 0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xba37dda in /usr/bin/clickhouse
clickhouse02 | 1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(int, fmt::v8::basic_format_string<char, fmt::v8::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::type>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) @ 0xba83198 in /usr/bin/clickhouse
clickhouse02 | 2. DB::DDLWorker::processTask(DB::DDLTaskBase&, std::__1::shared_ptr<zkutil::ZooKeeper> const&) @ 0x166718ad in /usr/bin/clickhouse
clickhouse02 | 3. DB::DDLWorker::scheduleTasks(bool) @ 0x1666f213 in /usr/bin/clickhouse
clickhouse02 | 4. DB::DDLWorker::runMainThread() @ 0x16668f7c in /usr/bin/clickhouse
clickhouse02 | 5. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::DDLWorker::*)(), DB::DDLWorker*>(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x1667d089 in /usr/bin/clickhouse
clickhouse02 | 6. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xbb046a8 in /usr/bin/clickhouse
clickhouse02 | 7. ? @ 0xbb07a3d in /usr/bin/clickhouse
clickhouse02 | 8. ? @ 0x7f764702c609 in ?
clickhouse02 | 9. clone @ 0x7f7646f51133 in ?
clickhouse02 | (version 22.7.1.2484 (official build))
clickhouse02 | 2022.07.22 22:12:07.662752 [ 235 ] {} <Information> DDLWorker(tests): Cleaned DDLWorker state
clickhouse03 | 2022.07.22 22:12:07.670201 [ 236 ] {} <Debug> DDLWorker(tests): Will not execute task query-0000000003: Entry query-0000000003 is a dummy task
clickhouse03 | 2022.07.22 22:12:07.676692 [ 236 ] {} <Debug> DDLWorker(tests): Will not execute task query-0000000004: Entry query-0000000004 is a dummy task
clickhouse03 | 2022.07.22 22:12:07.682419 [ 236 ] {} <Debug> DDLWorker(tests): Will not execute task query-0000000005: Entry query-0000000005 is a dummy task
clickhouse03 | 2022.07.22 22:12:07.684090 [ 236 ] {} <Debug> DDLWorker(tests): Will not execute task query-0000000006: Entry query-0000000006 hasn't been committed
clickhouse03 | 2022.07.22 22:12:07.687667 [ 236 ] {} <Debug> DDLWorker(tests): Processing task query-0000000007 (CREATE INDEX IX_testsBlah_Blah2_MinMax ON local_testTable(Blah2) TYPE MINMAX GRANULARITY 5)
clickhouse03 | 2022.07.22 22:12:07.691499 [ 236 ] {} <Debug> DDLWorker(tests): Executing query: CREATE INDEX IX_testsBlah_Blah2_MinMax ON tests.local_testTable(Blah2) TYPE MINMAX GRANULARITY 5
```
Thanks! | https://github.com/ClickHouse/ClickHouse/issues/39511 | https://github.com/ClickHouse/ClickHouse/pull/39565 | 8fc075a527bdb5f1c4b55bd29436bd3b52670547 | 44463cfca0af49bd50a6a1b994fe7fcac024f959 | "2022-07-22T19:14:50Z" | c++ | "2022-07-27T10:21:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,476 | ["docs/en/engines/table-engines/mergetree-family/mergetree.md"] | projections are production ready but stay experimental in docs | As of the ClickHouse [release v22.2, 2022-02-17](https://github.com/ClickHouse/ClickHouse/blob/master/CHANGELOG.md#new-feature-5), projections are marked as production ready:
> Projections are production ready. Set allow_experimental_projection_optimization by default and deprecate this setting. https://github.com/ClickHouse/ClickHouse/pull/34456 ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
But the [(at least English) doc](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree/#projections) still says:
> Projections are an experimental feature. To enable them you must set the [allow_experimental_projection_optimization](https://clickhouse.com/docs/en/operations/settings/settings#allow-experimental-projection-optimization) to 1. See also the [force_optimize_projection](https://clickhouse.com/docs/en/operations/settings/settings#force-optimize-projection) setting.
| https://github.com/ClickHouse/ClickHouse/issues/39476 | https://github.com/ClickHouse/ClickHouse/pull/39502 | 75d0232265a66bb889af7b9c7123a147fcc0f202 | b6cb0e33f59458f96634c380c3b6f976b19c5981 | "2022-07-21T19:54:32Z" | c++ | "2022-07-25T20:41:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,469 | ["src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/SelectQueryInfo.h", "tests/queries/0_stateless/02371_select_projection_normal_agg.reference", "tests/queries/0_stateless/02371_select_projection_normal_agg.sql"] | Projection select query exception. Not found column toStartOfHour(datetime) in block. |
**Describe what's wrong**
I am testing projections. An exception occurred during a query: ```DB::Exception: Not found column toStartOfHour(datetime) in block.```
**Does it reproduce on recent release?**
v22.3.4.20-lts
**How to reproduce**
* My ClickHouse server versions are 22.3.4.20 and 22.7.1
```
CREATE TABLE video_log
(
`datetime` DateTime, -- 20,000 records per second
`user_id` UInt64, -- Cardinality == 100,000,000
`device_id` UInt64, -- Cardinality == 200,000,000
`domain` LowCardinality(String), -- Cardinality == 100
`bytes` UInt64, -- Ranging from 128 to 1152
`duration` UInt64 -- Ranging from 100 to 400
)
ENGINE = MergeTree
PARTITION BY toDate(datetime) -- Daily partitioning
ORDER BY (user_id, device_id); -- Can only favor one column here
CREATE TABLE rng
(
`user_id_raw` UInt64,
`device_id_raw` UInt64,
`domain_raw` UInt64,
`bytes_raw` UInt64,
`duration_raw` UInt64
)
ENGINE = GenerateRandom(1024);
INSERT INTO video_log SELECT
toUnixTimestamp(toDateTime(today()))
+ (rowNumberInAllBlocks() / 20000),
user_id_raw % 100000000 AS user_id,
device_id_raw % 200000000 AS device_id,
domain_raw % 100,
(bytes_raw % 1024) + 128,
(duration_raw % 300) + 100
FROM rng
LIMIT 17280000;
ALTER TABLE video_log ADD PROJECTION p_norm
(
SELECT
datetime,
device_id,
bytes,
duration
ORDER BY device_id
);
ALTER TABLE video_log MATERIALIZE PROJECTION p_norm;
ALTER TABLE video_log ADD PROJECTION p_agg
(
SELECT
toStartOfHour(datetime) AS hour,
domain,
sum(bytes),
avg(duration)
GROUP BY
hour,
domain
);
ALTER TABLE video_log MATERIALIZE PROJECTION p_agg;
```
```
SELECT
toStartOfHour(datetime) AS hour,
sum(bytes),
avg(duration)
FROM video_log
WHERE (toDate(hour) = today()) AND (device_id = '100')
GROUP BY hour
Query id: ffc3bc0c-26cf-4415-9a00-d2184130c30d
0 rows in set. Elapsed: 0.035 sec.
Received exception from server (version 22.7.1):
Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Not found column toStartOfHour(datetime) in block. (NOT_FOUND_COLUMN_IN_BLOCK)
```
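A hedged, unverified workaround sketch: disable projection optimization for this query so the plain table is read instead of the projection:

```sql
SELECT
    toStartOfHour(datetime) AS hour,
    sum(bytes),
    avg(duration)
FROM video_log
WHERE (toDate(hour) = today()) AND (device_id = '100')
GROUP BY hour
SETTINGS allow_experimental_projection_optimization = 0;
```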
| https://github.com/ClickHouse/ClickHouse/issues/39469 | https://github.com/ClickHouse/ClickHouse/pull/39470 | 0dc183921f96f0b7738b59e1527c150c6dd1704f | 201cf69854e8486c8c002e42775a9b75282ba4f4 | "2022-07-21T15:50:13Z" | c++ | "2022-08-13T00:13:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,468 | ["src/Common/TLDListsHolder.cpp", "src/Common/TLDListsHolder.h", "src/Functions/URL/ExtractFirstSignificantSubdomain.h", "src/Functions/URL/FirstSignificantSubdomainCustomImpl.h", "tests/queries/0_stateless/01601_custom_tld.reference", "tests/queries/0_stateless/01601_custom_tld.sql"] | FirstSignificantSubdomain & Public Suffix list - lack of support for asterisk rules | For [the rule in the list](https://github.com/ClickHouse/ClickHouse/blob/82fc7375dd3ac4faae6c3b0db9f3a28fb2802d05/tests/config/top_level_domains/public_suffix_list.dat#L6430) `*.sch.uk`
and a domain like `www.something.sheffield.sch.uk`, the correct / expected result is `sheffield.sch.uk`.
Example with python (correct):
```
-- pip install publicsuffix2
-- python3
>>> from publicsuffix2 import get_public_suffix
>>> get_public_suffix('www.something.sheffield.sch.uk')
'something.sheffield.sch.uk'
```
Same from clickhouse:
```
SELECT cutToFirstSignificantSubdomain('www.something.sheffield.sch.uk')
Query id: 9661461f-a393-43ea-97a1-47c0a8850515
┌─cutToFirstSignificantSubdomain('www.something.sheffield.sch.uk')─┐
│ sch.uk                                                           │
└──────────────────────────────────────────────────────────────────┘
```
/cc @azat | https://github.com/ClickHouse/ClickHouse/issues/39468 | https://github.com/ClickHouse/ClickHouse/pull/39496 | 7666c4e39c165b0617a343107ab289782bcd3411 | 904a05ac21f1c44dadd130582b01536935e6fd4d | "2022-07-21T15:22:18Z" | c++ | "2022-07-27T06:55:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,419 | ["src/Columns/ColumnArray.cpp", "src/DataTypes/Serializations/SerializationTuple.cpp", "tests/queries/0_stateless/02381_parse_array_of_tuples.reference", "tests/queries/0_stateless/02381_parse_array_of_tuples.sql"] | offsets_column has data inconsistent with nested_column: data for INSERT was parsed from query. (LOGICAL_ERROR) | https://pastila.nl/?00ad187f/ecfb3d92d2e879533cc3aea0536ab3d7
- 22.3: exception on the client:
  `Code: 49. DB::Exception: offsets_column has data inconsistent with nested_column: data for INSERT was parsed from query. (LOGICAL_ERROR)`
- 22.7: the insert hangs
- 21.8.4: `Received exception from server (version 21.8.4): Code: 62. DB::Exception: Received from localhost:9000. DB::Exception: Empty query.`
| https://github.com/ClickHouse/ClickHouse/issues/39419 | https://github.com/ClickHouse/ClickHouse/pull/40034 | 6bec0f5854c96d256c57de4df8f70ba5520eefaa | c746dcd6449ca300f762757525987624a8dc52e9 | "2022-07-20T15:06:49Z" | c++ | "2022-08-11T10:10:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,287 | ["docs/en/sql-reference/functions/arithmetic-functions.md", "docs/ru/sql-reference/functions/arithmetic-functions.md", "docs/zh/sql-reference/functions/arithmetic-functions.md"] | modulo function behavior doesn't match description | https://clickhouse.com/docs/en/sql-reference/functions/arithmetic-functions/#moduloa-b-a--b-operator
**modulo(a, b), a % b operator**
> Calculates the remainder after division. If arguments are floating-point numbers, they are pre-converted to integers by dropping the decimal portion. The remainder is taken in the same sense as in C++. Truncated division is used for negative numbers. An exception is thrown when dividing by zero or when dividing a minimal negative number by minus one.
```sql
select 11.5::Float64 % 3.5::Float64
--
1

-- truncate decimal portion as in description:
11.0::Float64 % 3.0::Float64
--
2
```
| https://github.com/ClickHouse/ClickHouse/issues/39287 | https://github.com/ClickHouse/ClickHouse/pull/40110 | 1da908ef7191aff1f6228d51c71b738c7820f28a | 35ee71a908ca3459ec16717518553a3ce3a8b331 | "2022-07-16T13:24:28Z" | c++ | "2022-08-11T20:14:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,248 | ["src/DataTypes/Serializations/SerializationDate.cpp", "src/IO/ReadHelpers.h", "tests/queries/0_stateless/02457_csv_parse_date_out_of_range.reference", "tests/queries/0_stateless/02457_csv_parse_date_out_of_range.sh"] | Parsing out of range dates in CSV leads to unexpected result | When you parse a Date column whose values are bigger than the supported range, the result depends on the input format:
```sql
DROP TABLE IF EXISTS test_date_imports;
create temporary table test_date_imports (f String, t Date);
insert into test_date_imports(t) format TSV 2200-01-01
insert into test_date_imports format CSV 'csv',2200-01-01
insert into test_date_imports format JSONEachRow {"f":"JSONEachRow","t":"2200-01-01"}
insert into test_date_imports values ('values','2200-01-01')
SELECT * FROM test_date_imports FORMAT PrettyCompactMonoBlock;
┌─f───────────┬──────────t─┐
│             │ 2149-06-06 │
│ csv         │ 2020-07-27 │ <-- it looks like parsing happened via Date32 or DateTime64 path, and then casted with overflow
│ JSONEachRow │ 2149-06-06 │
│ values      │ 2149-06-06 │
└─────────────┴────────────┘
TRUNCATE TABLE test_date_imports;
insert into test_date_imports(t) format TSV 9999-01-01
insert into test_date_imports format CSV 'csv',9999-01-01
insert into test_date_imports format JSONEachRow {"f":"JSONEachRow","t":"9999-01-01"}
insert into test_date_imports values ('values','9999-01-01')
SELECT * FROM test_date_imports FORMAT PrettyCompactMonoBlock
┌─f───────────┬──────────t─┐
│             │ 2149-06-06 │
│ csv         │ 2104-06-06 │
│ JSONEachRow │ 2149-06-06 │
│ values      │ 2149-06-06 │
└─────────────┴────────────┘
4 rows in set. Elapsed: 0.002 sec.
```
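A hedged workaround sketch: import the column as String and convert explicitly, so out-of-range values saturate the same way regardless of the input format (hypothetical table and column names):

```sql
create temporary table test_date_imports_raw (f String, t_raw String);
-- after importing in any format:
SELECT f, toDate(substring(t_raw, 1, 10)) AS t FROM test_date_imports_raw;
```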
P.S. This is not an edge case found by QA; it's a real-life case from an end-user report. IRL it was the '[end of time](https://softwareengineering.stackexchange.com/questions/164843/is-there-a-constant-for-end-of-time)' date, exported by another database as '9999-12-31', which was always stored in ClickHouse as '2149-06-06', but after importing from CSV it became '2104-06-06'. | https://github.com/ClickHouse/ClickHouse/issues/39248 | https://github.com/ClickHouse/ClickHouse/pull/42044 | 2bc30634cbe97b443f9d1a55c4a40612bce44146 | 486780ba807d4b8078deb65ed0c94b21bb5821d3 | "2022-07-15T07:22:47Z" | c++ | "2022-10-19T11:10:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,242 | ["docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md", "docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md"] | SECURE flag no longer valid for MySQL external dictionary DDL query | **Describe what's wrong**
We use external MySQL dictionaries created by DDL queries like this one:
```SQL
CREATE DICTIONARY common.countries
(
`id` UInt64,
`code` String,
`name` String
)
PRIMARY KEY id
SOURCE(MYSQL(PORT '3306' USER 'staging' PASSWORD 'xxxxxxxxxxxxx' REPLICA (HOST 'staging-db' PRIORITY 1) DB 'staging_v4' WHERE 'type="Country"' TABLE places SECURE 0))
LIFETIME(MIN 3600 MAX 5400)
LAYOUT(HASHED())
```
Depending on the environment we set `SECURE` to either 0 or 1. This works in our current ClickHouse version, with the setting set to 0 in our Staging and to 1 in our Production environment.
That was with ClickHouse version 21.8.5.7, which is our current production version.
From version 22.3.6.5 onwards, after creating the dictionary and trying to access it, we receive:
```
SQL Error [36] [07000]: Code: 36. DB::Exception: Unexpected key `secure` in dictionary source configuration. (BAD_ARGUMENTS) (version 22.3.8.39 (official build))
, server ClickHouseNode(addr=http:127.0.0.1:14207, db=default)@-1765255297
```
Versions tested:
- 22.3.6.5
- 22.3.8.39
- 22.6.3.35
**Does it reproduce on recent release?**
Yes. Versions tested:
- 22.3.6.5
- 22.3.8.39
- 22.6.3.35
**Enable crash reporting**
We've reproduced the issue with crash reporting enabled.
**How to reproduce**
* ClickHouse server version to use: 22.3.6.5
* Which interface to use: Doesn't matter. Command line will be fine.
* Non-default settings: N/A
* Statements involved:
```SQL
CREATE DICTIONARY common.countries
(
`id` UInt64,
`code` String,
`name` String
)
PRIMARY KEY id
SOURCE(MYSQL(PORT '3306' USER 'staging' PASSWORD 'xxxxxxxxxxxxx' REPLICA (HOST 'staging-db' PRIORITY 1) DB 'staging_v4' WHERE 'type="Country"' TABLE places SECURE 0))
LIFETIME(MIN 3600 MAX 5400)
LAYOUT(HASHED())
```
* Sample data for all these tables: N/A
* Queries to run that lead to unexpected result: Any subsequent query to the external dictionary will provoke the error.
**Expected behavior**
We expect the `SECURE` flag to work without errors, as documented, in versions newer than 21.8.5.7.
**Error message and/or stacktrace**
```
SQL Error [36] [07000]: Code: 36. DB::Exception: Unexpected key `secure` in dictionary source configuration. (BAD_ARGUMENTS) (version 22.3.8.39 (official build))
, server ClickHouseNode(addr=http:127.0.0.1:14207, db=default)@-1765255297
```
**Additional context**
It does not matter if you use `SECURE` with value 0 or 1; it fails either way.
With the latest versions, it only works if you omit the `SECURE` flag.
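For anyone hitting this, a sketch of the same source clause with the flag omitted (the only form that currently works for us):

```sql
SOURCE(MYSQL(PORT '3306' USER 'staging' PASSWORD 'xxxxxxxxxxxxx' REPLICA (HOST 'staging-db' PRIORITY 1) DB 'staging_v4' WHERE 'type="Country"' TABLE places))
```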
| https://github.com/ClickHouse/ClickHouse/issues/39242 | https://github.com/ClickHouse/ClickHouse/pull/39356 | 061e61919afb295083e3f581227e59067c164188 | b9c8feb2b24ee97b3b16c38a834ecd6400966e0d | "2022-07-14T18:33:08Z" | c++ | "2022-07-26T15:11:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,174 | ["docker/test/stress/run.sh"] | Upgrade check fails if there's an invalid mutation | Example: https://s3.amazonaws.com/clickhouse-test-reports/0/334b3f3c730053eb282491de12864239aa23ded2/stress_test__undefined__actions_.html
There are a lot of errors on server startup, because some mutations cannot be executed:
```
2022.07.13 11:29:46.963353 [ 926970 ] {} <Error> MutateFromLogEntryTask: virtual bool DB::ReplicatedMergeMutateTaskBase::executeStep(): Code: 6. DB::Exception: Cannot parse string 'Hello' as UInt64: syntax error at begin of string. Note: there are toUInt64OrZero and toUInt64OrNull functions, which returns zero/NULL instead of throwing exception.: while executing 'FUNCTION _CAST(value :: 2, 'UInt64' :: 3) -> _CAST(value, 'UInt64') UInt64 : 4': (while reading from part /var/lib/clickhouse/store/979/979f9e75-9058-4985-9369-d38296a45f93/20191002_1_1_0_2/): While executing MergeTreeInOrder. (CANNOT_PARSE_TEXT), Stack trace (when copying this message, always include the lines below):
2022.07.13 11:30:23.431180 [ 926991 ] {} <Error> MutateFromLogEntryTask: virtual bool DB::ReplicatedMergeMutateTaskBase::executeStep(): Code: 6. DB::Exception: Cannot parse string 'Hello' as UInt64: syntax error at begin of string. Note: there are toUInt64OrZero and toUInt64OrNull functions, which returns zero/NULL instead of throwing exception.: while executing 'FUNCTION _CAST(value :: 2, 'UInt64' :: 3) -> _CAST(value, 'UInt64') UInt64 : 4': (while reading from part /var/lib/clickhouse/store/979/979f9e75-9058-4985-9369-d38296a45f93/20191002_1_1_0_2/): While executing MergeTreeInOrder. (CANNOT_PARSE_TEXT), Stack trace (when copying this message, always include the lines below):
2022.07.13 11:30:24.107495 [ 926972 ] {} <Error> MutateFromLogEntryTask: virtual bool DB::ReplicatedMergeMutateTaskBase::executeStep(): Code: 6. DB::Exception: Cannot parse string 'Hello' as UInt64: syntax error at begin of string. Note: there are toUInt64OrZero and toUInt64OrNull functions, which returns zero/NULL instead of throwing exception.: while executing 'FUNCTION _CAST(value :: 2, 'UInt64' :: 3) -> _CAST(value, 'UInt64') UInt64 : 4': (while reading from part /var/lib/clickhouse/store/979/979f9e75-9058-4985-9369-d38296a45f93/20191002_1_1_0_2/): While executing MergeTreeInOrder. (CANNOT_PARSE_TEXT), Stack trace (when copying this message, always include the lines below):
```
It seems like this mutation was started by the test `01414_mutations_and_errors_zookeeper`:
https://github.com/ClickHouse/ClickHouse/blob/4d4f29508140aa1e97f97c40e0e41e4889b41dc5/tests/queries/0_stateless/01414_mutations_and_errors_zookeeper.sh#L48
but `KILL MUTATION` was not executed for some reason (and it's totally fine for Stress Tests):
https://github.com/ClickHouse/ClickHouse/blob/4d4f29508140aa1e97f97c40e0e41e4889b41dc5/tests/queries/0_stateless/01414_mutations_and_errors_zookeeper.sh#L65
A similar issue may probably be caused by other tests as well. It's not clear how to handle it; all (trivial) solutions look quite bad:
- disable tests like this in the BC check
- kill all mutations before restarting the server (see the sketch below)
- add suppressions for each specific error (like #39176)
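For the second option, a minimal sketch of what "kill all mutations" could look like (a blanket cleanup, assuming it's acceptable for a test environment to kill everything that is still pending):
```sql
-- kills every mutation that has not finished yet, across all tables
KILL MUTATION WHERE NOT is_done SYNC;
```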
cc: @Avogar | https://github.com/ClickHouse/ClickHouse/issues/39174 | https://github.com/ClickHouse/ClickHouse/pull/43436 | c636c323ea2fda5a9702f8fa82f1397b4f0d62a8 | ae8391c5dc377d0308104abf52d28b722ad95947 | "2022-07-13T12:20:53Z" | c++ | "2022-11-22T17:08:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 39,164 | ["src/Interpreters/ActionsDAG.cpp", "tests/queries/0_stateless/01881_union_header_mismatch_bug.sql"] | Invalid number of columns in chunk pushed to OutputPort. Expected 3, found 4 | https://s3.amazonaws.com/clickhouse-test-reports/0/ec24f730b12074446b334eafbf94e7d505cdec6c/stress_test__debug__actions_.html
```
SELECT toDateTime64(toString(toString('0000-00-00 00:00:000000-00-00 00:00:00', toDateTime64(toDateTime64('655.36', -2, NULL)))), NULL) FROM t1_00850 GLOBAL INNER JOIN (SELECT toDateTime64(toDateTime64('6553.6', '', NULL), NULL), * FROM (SELECT * FROM t2_00850) INNER JOIN (SELECT toDateTime64('6553.7', 1024, NULL), * FROM t1_00850) USING (dummy)) USING (dummy)
```
```
2022.07.13 00:00:07.023917 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Debug> executeQuery: (from [::1]:41482) (comment: 00850_global_join_dups.sql) -- query from fuzzer
SELECT toDateTime64(toString(toString('0000-00-00 00:00:000000-00-00 00:00:00', toDateTime64(toDateTime64('655.36', -2, NULL)))), NULL) FROM t1_00850 GLOBAL INNER JOIN (SELECT toDateTime64(toDateTime64('6553.6', '', NULL), NULL), * FROM (SELECT * FROM t2_00850) INNER JOIN (SELECT toDateTime64('6553.7', 1024, NULL), * FROM t1_00850) USING (dummy)) USING (dummy); (stage: Complete)
2022.07.13 00:00:07.233173 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON test_21.t2_00850
2022.07.13 00:00:07.244994 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON test_21.t1_00850
2022.07.13 00:00:07.288479 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON test_21.t2_00850
2022.07.13 00:00:07.456588 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON test_21.t1_00850
2022.07.13 00:00:07.461998 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Debug> TableJoin: Left JOIN converting actions: empty
2022.07.13 00:00:07.464626 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Debug> TableJoin: Right JOIN converting actions: empty
2022.07.13 00:00:07.465885 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> FullSortingMergeJoin: Will use full sorting merge join
2022.07.13 00:00:07.575593 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON test_21.t2_00850
2022.07.13 00:00:07.621874 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON test_21.t1_00850
2022.07.13 00:00:07.721637 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON test_21.t1_00850
2022.07.13 00:00:07.724186 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> InterpreterSelectQuery: Complete -> Complete
2022.07.13 00:00:07.728390 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Debug> TableJoin: Left JOIN converting actions: empty
2022.07.13 00:00:07.736440 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Debug> TableJoin: Right JOIN converting actions: empty
2022.07.13 00:00:07.744422 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> FullSortingMergeJoin: Will use full sorting merge join
2022.07.13 00:00:07.852283 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON test_21.t2_00850
2022.07.13 00:00:07.868928 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON test_21.t2_00850
2022.07.13 00:00:07.881920 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> InterpreterSelectQuery: Complete -> Complete
2022.07.13 00:00:07.889419 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2022.07.13 00:00:07.922932 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Debug> JoiningTransform: Before join block: 'dummy UInt8 UInt8(size = 0)'
2022.07.13 00:00:07.925859 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Debug> JoiningTransform: After join block: 'dummy UInt8 UInt8(size = 0), dummy UInt8 UInt8(size = 0), toDateTime64('6553.7', 1024, NULL) Nullable(Nothing) Nullable(size = 0, Nothing(size = 0), UInt8(size = 0))'
2022.07.13 00:00:07.974440 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> MergeJoinTransform: Use MergeJoinTransform
2022.07.13 00:00:08.024326 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Debug> Connection (localhost:9000): Sent data for 2 scalars, total 2 rows in 0.000122579 sec., 16145 rows/sec., 16.20 KiB (127.63 MiB/sec.), no compression.
2022.07.13 00:00:08.028220 [ 1661 ] {1ffdf8a7-c768-47d7-a71e-bec1f6f27d81} <Debug> executeQuery: (from [::1]:41218, initial_query_id: bf6b0611-3b69-4a46-83e9-ba2bbcba5250) (comment: 00850_global_join_dups.sql) SELECT `t_local`.`dummy` FROM `test_21`.`t_local` (stage: Complete)
2022.07.13 00:00:08.094945 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Debug> Connection (localhost:9000): Sent data for 2 scalars, total 2 rows in 0.000134199 sec., 14767 rows/sec., 2.98 KiB (21.51 MiB/sec.), no compression.
2022.07.13 00:00:08.117853 [ 19108 ] {02f6446b-c5f1-4cc4-84ed-449bf05230db} <Debug> executeQuery: (from [::1]:41506, initial_query_id: bf6b0611-3b69-4a46-83e9-ba2bbcba5250) (comment: 00850_global_join_dups.sql) SELECT `t_local`.`dummy`, toDateTime64('6553.7', 1024, NULL) FROM `test_21`.`t_local` (stage: Complete)
2022.07.13 00:00:08.764100 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Test> Epoll: EINTR
2022.07.13 00:00:10.088155 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Test> Epoll: EINTR
2022.07.13 00:00:10.091679 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Test> Epoll: EINTR
2022.07.13 00:00:10.100270 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Test> Epoll: EINTR
2022.07.13 00:00:10.103504 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Test> Epoll: EINTR
2022.07.13 00:00:10.164106 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> MergeJoinAlgorithm: Finished pocessing in 2.192018023 seconds, left: 0 blocks, 0 rows; right: 0 blocks, 0 rows, max blocks loaded to memory: 0
2022.07.13 00:00:10.196465 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> MergeJoinAlgorithm: Finished pocessing in 2.224018286 seconds, left: 1 blocks, 1 rows; right: 1 blocks, 1 rows, max blocks loaded to memory: 0
2022.07.13 00:00:10.201542 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> MergeJoinAlgorithm: Finished pocessing in 2.228018319 seconds, left: 1 blocks, 1 rows; right: 1 blocks, 1 rows, max blocks loaded to memory: 0
2022.07.13 00:00:10.202640 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Trace> MergeJoinAlgorithm: Finished pocessing in 2.228018319 seconds, left: 1 blocks, 1 rows; right: 1 blocks, 1 rows, max blocks loaded to memory: 0
2022.07.13 00:00:10.223354 [ 20619 ] {bf6b0611-3b69-4a46-83e9-ba2bbcba5250} <Fatal> : Logical error: 'Invalid number of columns in chunk pushed to OutputPort. Expected 3, found 4
Header: dummy UInt8 UInt8(size = 0), dummy UInt8 UInt8(size = 0), toDateTime64('6553.7', 1024, NULL) Nullable(Nothing) Nullable(size = 0, Nothing(size = 0), UInt8(size = 0))
Chunk: UInt8(size = 1) UInt8(size = 1) Nullable(size = 1, Nothing(size = 1), UInt8(size = 1)) Nullable(size = 1, Nothing(size = 1), UInt8(size = 1))
'.
2022.07.13 00:17:12.841254 [ 21690 ] {} <Fatal> BaseDaemon: ########################################
2022.07.13 00:17:12.862732 [ 21690 ] {} <Fatal> BaseDaemon: (version 22.7.1.1 (official build), build id: 77418886F3BA724B) (from thread 20619) (query_id: bf6b0611-3b69-4a46-83e9-ba2bbcba5250) (query: -- query from fuzzer
2022.07.13 00:17:12.937565 [ 21690 ] {} <Fatal> BaseDaemon:
2022.07.13 00:17:12.950392 [ 21690 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f0f83bcb00b 0x7f0f83baa859 0x1709bd66 0x1709bdf5 0x1709bea5 0x26f2661c 0x2852c222 0x1731d31e 0x28019a2c 0x2800ef1f 0x2800f197 0x2800e312 0x2800d8b8 0x2800ba1e 0x26347c9b 0x26346d47 0x263468f2 0x26330377 0x26346857 0x26330362 0x26346857 0x26330362 0x2631bd23 0x2631b9f8 0x26a62947 0x26a6a0f7 0x26a4bbc3 0x26a48e52 0x26a47291 0x26adba59 0x26ad95a9 0x26ad8a87 0x269e9ad0 0x269e7f5d 0x26f2b559 0x26f28764 0x27f8d062 0x27f9c265 0x2c496179 0x2c496986 0x2c6dbcd4
2022.07.13 00:17:12.968547 [ 21690 ] {} <Fatal> BaseDaemon: 4. raise @ 0x7f0f83bcb00b in ?
2022.07.13 00:17:12.970155 [ 21690 ] {} <Fatal> BaseDaemon: 5. abort @ 0x7f0f83baa859 in ?
2022.07.13 00:17:13.345668 [ 21690 ] {} <Fatal> BaseDaemon: 6. /build/build_docker/../src/Common/Exception.cpp:40: DB::abortOnFailedAssertion(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x1709bd66 in /usr/bin/clickhouse
2022.07.13 00:17:13.701373 [ 21690 ] {} <Fatal> BaseDaemon: 7. /build/build_docker/../src/Common/Exception.cpp:63: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*> > const&) @ 0x1709bdf5 in /usr/bin/clickhouse
2022.07.13 00:17:14.044700 [ 21690 ] {} <Fatal> BaseDaemon: 8. /build/build_docker/../src/Common/Exception.cpp:70: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x1709bea5 in /usr/bin/clickhouse
2022.07.13 00:17:15.040604 [ 21690 ] {} <Fatal> BaseDaemon: 9. /build/build_docker/../src/Common/Exception.h:37: DB::Exception::Exception<unsigned long, unsigned long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(int, fmt::v8::basic_format_string<char, fmt::v8::type_identity<unsigned long>::type, fmt::v8::type_identity<unsigned long>::type, fmt::v8::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::type, fmt::v8::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::type>, unsigned long&&, unsigned long&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) @ 0x26f2661c in /usr/bin/clickhouse
2022.07.13 00:17:15.336936 [ 21690 ] {} <Fatal> BaseDaemon: 10.1. inlined from /build/build_docker/../src/Processors/Port.h:408: DB::OutputPort::pushData(DB::Port::State::Data)
2022.07.13 00:17:15.351552 [ 21690 ] {} <Fatal> BaseDaemon: 10.2. inlined from ../src/Processors/Port.h:396: DB::OutputPort::push(DB::Chunk)
2022.07.13 00:17:15.378404 [ 21690 ] {} <Fatal> BaseDaemon: 10. ../src/Processors/Merges/IMergingTransform.cpp:157: DB::IMergingTransformBase::prepare() @ 0x2852c222 in /usr/bin/clickhouse
2022.07.13 00:17:16.665598 [ 21690 ] {} <Fatal> BaseDaemon: 11. /build/build_docker/../src/Processors/IProcessor.h:190: DB::IProcessor::prepare(std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&) @ 0x1731d31e in /usr/bin/clickhouse
2022.07.13 00:17:17.042596 [ 21690 ] {} <Fatal> BaseDaemon: 12. /build/build_docker/../src/Processors/Executors/ExecutingGraph.cpp:269: DB::ExecutingGraph::updateNode(unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&) @ 0x28019a2c in /usr/bin/clickhouse
2022.07.13 00:17:17.471569 [ 21690 ] {} <Fatal> BaseDaemon: 13. /build/build_docker/../src/Processors/Executors/PipelineExecutor.cpp:241: DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x2800ef1f in /usr/bin/clickhouse
2022.07.13 00:17:17.979112 [ 21690 ] {} <Fatal> BaseDaemon: 14. /build/build_docker/../src/Processors/Executors/PipelineExecutor.cpp:187: DB::PipelineExecutor::executeSingleThread(unsigned long) @ 0x2800f197 in /usr/bin/clickhouse
2022.07.13 00:17:18.415886 [ 21690 ] {} <Fatal> BaseDaemon: 15. /build/build_docker/../src/Processors/Executors/PipelineExecutor.cpp:331: DB::PipelineExecutor::executeImpl(unsigned long) @ 0x2800e312 in /usr/bin/clickhouse
2022.07.13 00:17:18.974933 [ 21690 ] {} <Fatal> BaseDaemon: 16. /build/build_docker/../src/Processors/Executors/PipelineExecutor.cpp:88: DB::PipelineExecutor::execute(unsigned long) @ 0x2800d8b8 in /usr/bin/clickhouse
2022.07.13 00:17:19.221225 [ 21690 ] {} <Fatal> BaseDaemon: 17. /build/build_docker/../src/Processors/Executors/CompletedPipelineExecutor.cpp:98: DB::CompletedPipelineExecutor::execute() @ 0x2800ba1e in /usr/bin/clickhouse
2022.07.13 00:17:21.390334 [ 21690 ] {} <Fatal> BaseDaemon: 18. /build/build_docker/../src/Interpreters/GlobalSubqueriesVisitor.h:177: DB::GlobalSubqueriesMatcher::Data::addExternalStorage(std::__1::shared_ptr<DB::IAST>&, bool) @ 0x26347c9b in /usr/bin/clickhouse
2022.07.13 00:17:23.505326 [ 21690 ] {} <Fatal> BaseDaemon: 19. /build/build_docker/../src/Interpreters/GlobalSubqueriesVisitor.h:246: DB::GlobalSubqueriesMatcher::visit(DB::ASTTablesInSelectQueryElement&, std::__1::shared_ptr<DB::IAST>&, DB::GlobalSubqueriesMatcher::Data&) @ 0x26346d47 in /usr/bin/clickhouse
2022.07.13 00:17:25.467789 [ 21690 ] {} <Fatal> BaseDaemon: 20. /build/build_docker/../src/Interpreters/GlobalSubqueriesVisitor.h:200: DB::GlobalSubqueriesMatcher::visit(std::__1::shared_ptr<DB::IAST>&, DB::GlobalSubqueriesMatcher::Data&) @ 0x263468f2 in /usr/bin/clickhouse
2022.07.13 00:17:26.806039 [ 21690 ] {} <Fatal> BaseDaemon: 21. /build/build_docker/../src/Interpreters/InDepthNodeVisitor.h:34: DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0x26330377 in /usr/bin/clickhouse
2022.07.13 00:17:28.887953 [ 21690 ] {} <Fatal> BaseDaemon: 22. /build/build_docker/../src/Interpreters/InDepthNodeVisitor.h:53: DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visitChildren(std::__1::shared_ptr<DB::IAST>&) @ 0x26346857 in /usr/bin/clickhouse
2022.07.13 00:17:30.662742 [ 21690 ] {} <Fatal> BaseDaemon: 23. /build/build_docker/../src/Interpreters/InDepthNodeVisitor.h:30: DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0x26330362 in /usr/bin/clickhouse
2022.07.13 00:17:32.191387 [ 21690 ] {} <Fatal> BaseDaemon: 24. /build/build_docker/../src/Interpreters/InDepthNodeVisitor.h:53: DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visitChildren(std::__1::shared_ptr<DB::IAST>&) @ 0x26346857 in /usr/bin/clickhouse
2022.07.13 00:17:33.144794 [ 679 ] {} <Fatal> Application: Child process was terminated by signal 6.
``` | https://github.com/ClickHouse/ClickHouse/issues/39164 | https://github.com/ClickHouse/ClickHouse/pull/39799 | 316528817b2458fd37960b965f9eef10b4d13535 | a3bf9496d4c1eaa231d40427bb3cc8c265667659 | "2022-07-13T09:26:33Z" | c++ | "2022-08-02T08:35:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,972 | ["utils/self-extracting-executable/compressor.cpp", "utils/self-extracting-executable/decompressor.cpp", "utils/self-extracting-executable/types.h"] | Self-extracting compressor utility should have --decompressor parameter. | **Use case**
We want to use the self-extracting compressor utility during the build, which can be cross-compiled.
Currently the self-extracting compressor utility includes an appended decompressor binary, which serves as the base for building a self-extracting executable. The problem arises from the fact that the self-extracting executable usually needs to be built during the ClickHouse build (which can be cross-compiled), so in such a case an additional compressor has to be built for the native architecture, and this compressor should then compress the target binaries and attach them to a decompressor built for the target architecture.
**Describe the solution you'd like**
Provide a `--decompressor` parameter so that the proper decompressor binary can be chosen.
**Additional context**
ref #34755
| https://github.com/ClickHouse/ClickHouse/issues/38972 | https://github.com/ClickHouse/ClickHouse/pull/39065 | df190b14b289e279af94bceef4d16e9d4ff6326a | 2079699f7542527d62cbaed828940d54b0ca00a2 | "2022-07-07T19:06:20Z" | c++ | "2022-07-13T05:50:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,962 | ["src/Common/ZooKeeper/ZooKeeper.cpp", "src/Common/ZooKeeper/ZooKeeperCommon.h", "src/Coordination/KeeperDispatcher.cpp"] | OpNum for watch response doesn't exist (Run time inconsistency) | https://s3.amazonaws.com/clickhouse-test-reports/0/10dc83b7a9c18228c4a4fc0dcfe6b18d2427462e/stress_test__debug__actions_.html
I found a weird exception while investigating why ephemeral nodes were not removed for a long time:
https://pastila.nl/?00ec9776/d8858e442f8453f9b9e31a2f1da3cc8b | https://github.com/ClickHouse/ClickHouse/issues/38962 | https://github.com/ClickHouse/ClickHouse/pull/38963 | f15d9ca59cf5edae887f77eb745207a8233bfc98 | 1d93dc1c9a2d4f619c5e9702efdf92f3c2a9d77a | "2022-07-07T15:05:58Z" | c++ | "2022-07-07T19:01:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,959 | ["docs/en/operations/performance-test.md"] | Manual steps for benchmark not working (might need new prepared partitions) | (you don't have to strictly follow this form)
**Describe the unexpected behaviour**
When following the manual steps for testing hardware at https://clickhouse.com/docs/en/operations/performance-test, `clickhouse server` fails to start and complains about the metadata for the default database being missing or older than 20.7. The prepared partitions and metadata came from the ClickHouse S3:
```
2022.07.07 09:07:31.838952 [ 16989 ] {} <Error> Application: DB::Exception: Data directory for default database exists, but metadata file does not. Probably you are trying to upgrade from version older than 20.7. If so, you should upgrade through intermediate version.
```
(full log is down below)
**How to reproduce**
Follow the [manual steps](https://clickhouse.com/docs/en/operations/performance-test#manual-run)
* Which ClickHouse server version to use
ClickHouse server version 22.7.1.1503 (official build).
**Error message and/or stacktrace**
If applicable, add screenshots to help explain your problem.
```
2022.07.07 09:07:30.837076 [ 16989 ] {} <Error> Application: Caught exception while loading metadata: Code: 48. DB::Exception: Data directory for default database exists, but metadata file does not. Probably you are trying to upgrade from version older than 20.7. If so, you should upgrade through intermediate version. (NOT_IMPLEMENTED), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb9f847a in /home/droscign/new/clickhouse
1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(int, fmt::v8::basic_format_string<char, fmt::v8::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::type>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xba64eb8 in /home/droscign/new/clickhouse
2. DB::checkUnsupportedVersion(std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x16a477c6 in /home/droscign/new/clickhouse
3. DB::loadMetadata(std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x16a4666c in /home/droscign/new/clickhouse
4. DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xba83274 in /home/droscign/new/clickhouse
5. Poco::Util::Application::run() @ 0x1a384e86 in /home/droscign/new/clickhouse
6. DB::Server::run() @ 0xba71774 in /home/droscign/new/clickhouse
7. mainEntryClickHouseServer(int, char**) @ 0xba6ee47 in /home/droscign/new/clickhouse
8. main @ 0xb9f1990 in /home/droscign/new/clickhouse
9. ? @ 0x7f624826ed90 in ?
10. __libc_start_main @ 0x7f624826ee40 in ?
11. _start @ 0xb7c242e in /home/droscign/new/clickhouse
(version 22.7.1.1503 (official build))
2022.07.07 09:07:30.837470 [ 16989 ] {} <Information> Application: Shutting down storages.
2022.07.07 09:07:30.838515 [ 16989 ] {} <Debug> Application: Shut down storages.
2022.07.07 09:07:31.827973 [ 16989 ] {} <Debug> Application: Destroyed global context.
2022.07.07 09:07:31.838952 [ 16989 ] {} <Error> Application: DB::Exception: Data directory for default database exists, but metadata file does not. Probably you are trying to upgrade from version older than 20.7. If so, you should upgrade through intermediate version.
2022.07.07 09:07:31.838966 [ 16989 ] {} <Information> Application: shutting down
2022.07.07 09:07:31.838970 [ 16989 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2022.07.07 09:07:31.839002 [ 16990 ] {} <Trace> BaseDaemon: Received signal -2
2022.07.07 09:07:31.839018 [ 16990 ] {} <Information> BaseDaemon: Stop SignalListener thread
```
| https://github.com/ClickHouse/ClickHouse/issues/38959 | https://github.com/ClickHouse/ClickHouse/pull/39172 | 8f348edbbd1701ef35abec5060b3754bcc7dbbcc | df627a3e4dcf6e50438eb9677238482265f9cdfb | "2022-07-07T13:22:33Z" | c++ | "2022-07-30T02:23:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,930 | ["src/DataTypes/DataTypesDecimal.h", "src/DataTypes/getLeastSupertype.cpp", "tests/queries/0_stateless/02364_dictionary_datetime_64_attribute_crash.reference", "tests/queries/0_stateless/02364_dictionary_datetime_64_attribute_crash.sql"] | Segfault with Dictionary with DateTime64(9) attribute | ```sql
create table dat (blockNum Decimal(10,0), eventTimestamp DateTime64(9) )
Engine=MergeTree() primary key eventTimestamp;
insert into dat values (1, '2022-01-24 02:30:00.008122000');
CREATE DICTIONARY datDictionary
(
`blockNum` Decimal(10, 0),
`eventTimestamp` DateTime64(9)
)
PRIMARY KEY blockNum
SOURCE(CLICKHOUSE(TABLE 'dat'))
LIFETIME(MIN 0 MAX 1000)
LAYOUT(FLAT());
select * from datDictionary;
ββblockNumββ¬ββββββββββββββββeventTimestampββ
β 1 β 2022-01-24 02:30:00.008122000 β
ββββββββββββ΄ββββββββββββββββββββββββββββββββ
select count(*) from dat where eventTimestamp >= (select eventTimestamp from datDictionary);
Exception on client:
Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF)
Connecting to localhost:9000 as user default.
Code: 210. DB::NetException: Connection refused (localhost:9000). (NETWORK_ERROR)
```
```sql
drop dictionary datDictionary;
CREATE DICTIONARY datDictionary
(
`blockNum` Decimal(10, 0),
`eventTimestamp` DateTime64(9)
)
PRIMARY KEY blockNum
SOURCE(CLICKHOUSE(TABLE 'dat'))
LIFETIME(MIN 0 MAX 1000)
LAYOUT(complex_key_hashed());
select count(*) from dat where eventTimestamp >= (select eventTimestamp from datDictionary);
Error on processing query: Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF) (version 22.7.1.1054)
Connecting to localhost:9000 as user default.
Code: 210. DB::NetException: Connection refused (localhost:9000). (NETWORK_ERROR)
```
```sql
select * from datDictionary;
ββblockNumββ¬ββββββββββββββββeventTimestampββ
β 1 β 2022-01-24 02:30:00.008122000 β
ββββββββββββ΄ββββββββββββββββββββββββββββββββ
select dictGet('datDictionary', 'eventTimestamp', tuple(cast('1','Decimal(10,0)')));
Code: 407. DB::Exception: Received from localhost:9000. DB::Exception: Bad scale of decimal field. (DECIMAL_OVERFLOW)
```
| https://github.com/ClickHouse/ClickHouse/issues/38930 | https://github.com/ClickHouse/ClickHouse/pull/39391 | 3759ee76a73e6da62605baa61954ee75d9c77452 | 3b755ffb2a56a2b7205ab060060e20add5c13339 | "2022-07-06T19:27:03Z" | c++ | "2022-07-20T21:14:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,908 | ["src/Parsers/ExpressionListParsers.cpp", "src/TableFunctions/TableFunctionExecutable.cpp", "src/TableFunctions/TableFunctionExecutable.h", "tests/integration/test_executable_table_function/test.py"] | `executable()` table function has no way to set up settings available in Executable storage | Executable settings like `max_command_execution_time` or `command_termination_timeout` cannot be set for `executable()` table function at all. Which means that the hardcoded values will be used.
I suggest adding it to the `executable()` function definition.
The simplest way is:
```
SELECT * FROM executable('<script name>', '<format>', (SET max_command_execution_time=100), (SELECT * FROM table))
```
As the executable constructor already parses input `SELECT` statements anyway, adding a `SET` statement there seems pretty logical.
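One alternative that might be worth sketching (purely hypothetical syntax, none of this exists today) is to reuse the regular query-level `SETTINGS` clause and route the relevant keys to the source:
```
SELECT * FROM executable('<script name>', '<format>', (SELECT * FROM table))
SETTINGS max_command_execution_time = 100, command_termination_timeout = 10
```
The downside is that source-level knobs would get mixed with ordinary query settings, so a dedicated `SET` argument is probably cleaner.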
Ideas? Any other better API?
I can implement that one. | https://github.com/ClickHouse/ClickHouse/issues/38908 | https://github.com/ClickHouse/ClickHouse/pull/39681 | bf574b91547aec799364d032564606feb5a8bf03 | 31891322a51febe79ec3edba6278b5cecdd9e8df | "2022-07-06T14:49:07Z" | c++ | "2022-08-01T15:59:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,906 | ["contrib/vectorscan-cmake/CMakeLists.txt", "contrib/vectorscan-cmake/rageled_files/Parser.cpp", "contrib/vectorscan-cmake/rageled_files/aarch64/Parser.cpp", "contrib/vectorscan-cmake/rageled_files/aarch64/control_verbs.cpp", "contrib/vectorscan-cmake/rageled_files/amd64/Parser.cpp", "contrib/vectorscan-cmake/rageled_files/amd64/control_verbs.cpp", "contrib/vectorscan-cmake/rageled_files/control_verbs.cpp"] | Enable vectorscan for ARM | **Describe the unexpected behaviour**
VectorScan is cross-platform, yet ClickHouse enables it only for x86.
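For context, the vectorscan-backed code paths are the multi-pattern matching functions (cf. the fallback file mentioned below), e.g.:
```sql
SELECT multiMatchAny('clickhouse', ['click.+', 'house$']); -- returns 1
```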
**Additional context**
- #38171 replaced hyperscan with vectorscan, which is a cross-platform fork of hyperscan. Due to wrong results on ARM (see the PR), we currently compile vectorscan only for x86. Need to check whether a new version resolves the problem, or continue the investigation.
- Once done, we can remove the non-vectorscan fallback code in file "Functions/MultiMatchAnyImpl.h" | https://github.com/ClickHouse/ClickHouse/issues/38906 | https://github.com/ClickHouse/ClickHouse/pull/41033 | 9a0892c40c9bbe5f27981014468fccb1edf16d57 | 485262991e50788983c632694e4b7c45deec4c00 | "2022-07-06T14:17:45Z" | c++ | "2022-09-11T19:57:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,889 | ["src/Common/ShellCommand.cpp", "src/Common/waitForPid.cpp", "src/Common/waitForPid.h"] | ArrowStream feeding into executable has an unexpected slowdown of ~1 sec | Playing with various formats for the `executable()` table function led me to a peculiar perf problem.
Query:
```sql
caspiandb6 :) SELECT * FROM executable('arrow_sum.py', 'ArrowStream', 'sum UInt64', (SELECT * FROM numbers(10*1000000)));
SELECT *
FROM executable('arrow_sum.py', 'ArrowStream', 'sum UInt64', (
SELECT *
FROM numbers(10 * 1000000)
))
Query id: b73e5936-22d1-4d4b-826e-18d3aab80e47
βββββββββββββsumββ
β 49999995000000 β
ββββββββββββββββββ
1 row in set. Elapsed: 1.131 sec.
```
1.131 sec ? seems slow as hell
Let's decompose the query:
```bash
$ time ./clickhouse-client --query="SELECT * FROM numbers(10*1000000) FORMAT ArrowStream" > /tmp/file
real 0m0.097s
user 0m0.019s
sys 0m0.065s
$ time cat /tmp/file | ./user_scripts/arrow_sum.py >/dev/null
real 0m0.128s
user 0m0.182s
sys 0m0.301s
```
0.097 + 0.128 = 0.225 sec
This looks about right.
Let's try different array sizes:
```sql
caspiandb6 :) SELECT * FROM executable('arrow_sum.py', 'ArrowStream', 'sum UInt64', (SELECT * FROM numbers(1*1000000)));
SELECT *
FROM executable('arrow_sum.py', 'ArrowStream', 'sum UInt64', (
SELECT *
FROM numbers(1 * 1000000)
))
Query id: 26965ad9-8115-4b59-a0cd-c062a7e0658a
βββββββββββsumββ
β 499999500000 β
ββββββββββββββββ
1 row in set. Elapsed: 1.094 sec.
caspiandb6 :) SELECT * FROM executable('arrow_sum.py', 'ArrowStream', 'sum UInt64', (SELECT * FROM numbers(100*1000000)));
SELECT *
FROM executable('arrow_sum.py', 'ArrowStream', 'sum UInt64', (
SELECT *
FROM numbers(100 * 1000000)
))
Query id: e3dea8a3-1fb6-4d25-9182-1d9aadd12e7b
βββββββββββββββsumββ
β 4999999950000000 β
ββββββββββββββββββββ
1 row in set. Elapsed: 1.302 sec.
```
It looks like 1 sec is just added to all queries.
Frankly, it seems like the famous `sleep 1` problem somewhere in the code. :)
@kitaisreal any idea what could be the problem?
I will continue to poke it meanwhile.
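One quick check that might narrow it down (assuming the same script): if the constant is paid on process startup/teardown rather than on the data path, even a single-row input should show roughly the same ~1 sec:
```sql
SELECT * FROM executable('arrow_sum.py', 'ArrowStream', 'sum UInt64', (SELECT * FROM numbers(1)));
```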
For the sake of completeness:
`arrow_sum.py`
```python
#!/usr/bin/python3
import sys
import traceback
import pyarrow as pa
import pyarrow.compute as pc
if __name__ == "__main__":
    # side-channel debug log (stdout is used for the result)
    fd = open("/tmp/debug.log", "w")
    sum = pa.scalar(0, type="UInt64")
    try:
        # read ArrowStream batches from stdin and accumulate the sum
        with pa.ipc.open_stream(sys.stdin.buffer) as reader:
            for b in reader:
                sum = pc.add(sum, pc.sum(b[0]))
        # emit a single-row ArrowStream result back to ClickHouse
        array = pa.array([sum.as_py()], type="UInt64")
        out_batch = pa.record_batch([array], names=['sum'])
        with pa.ipc.new_stream(sys.stdout.buffer, out_batch.schema) as writer:
            writer.write(out_batch)
    except Exception as e:
        traceback.print_exc(file=fd)
    fd.close()
    sys.stdout.flush()
``` | https://github.com/ClickHouse/ClickHouse/issues/38889 | https://github.com/ClickHouse/ClickHouse/pull/38929 | eabdd6adbb7456927e1373d9c680a1b38262588d | d3ce4ffc88e314553e6caabccef6220bd4652f27 | "2022-07-06T09:13:54Z" | c++ | "2022-07-20T09:26:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,872 | ["src/DataTypes/DataTypeTuple.cpp", "src/DataTypes/DataTypeTuple.h", "src/Functions/tupleElement.cpp", "tests/queries/0_stateless/02354_tuple_element_with_default.reference", "tests/queries/0_stateless/02354_tuple_element_with_default.sql"] | tupleElement with default value | ```
select tupleElement(CAST(tuple(1,2),'Tuple(x UInt64, y UInt64)'), 'x')
-- ok
select tupleElement(CAST(tuple(1,2),'Tuple(x UInt64, y UInt64)'), 'z')
-- fails
-- Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Tuple doesn't have element with name 'z': While processing tupleElement(CAST((1, 2), 'Tuple(x UInt64, y UInt64)'), 'z'). (NOT_FOUND_COLUMN_IN_BLOCK)
```
Proposal: add a 3rd parameter to the `tupleElement` function and return it if the tuple doesn't have the member.
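A sketch of how the proposed call could look (the 3rd argument does not exist yet; results are what I would expect):
```
select tupleElement(CAST(tuple(1,2),'Tuple(x UInt64, y UInt64)'), 'z', 0)
-- would return 0 instead of throwing
select tupleElement(CAST(tuple(1,2),'Tuple(x UInt64, y UInt64)'), 'x', 0)
-- would return 1, as before
```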
Why? The tuple comes from object('JSON'), and sometimes some fields are missing in some JSONs. | https://github.com/ClickHouse/ClickHouse/issues/38872 | https://github.com/ClickHouse/ClickHouse/pull/38989 | 64a189249f18bf0d793388e64b46ab41f3893472 | 68aab132a5c311892b560d9b638cc0afcc2b1837 | "2022-07-05T21:25:43Z" | c++ | "2022-07-21T11:58:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,833 | ["docs/en/sql-reference/functions/string-replace-functions.md", "docs/ru/sql-reference/functions/string-replace-functions.md", "src/Functions/registerFunctionsStringRegexp.cpp", "src/Functions/translate.cpp", "tests/queries/0_stateless/02353_translate.reference", "tests/queries/0_stateless/02353_translate.sql"] | Multireplace / Transliterate? | > (you don't have to strictly follow this form)
**Use case**
Change the used alphabet for some encoding (see https://github.com/ClickHouse/ClickHouse/issues/38832, https://github.com/ClickHouse/ClickHouse/issues/38002 etc.), implement a Caesar cipher, change DNA to RNA, etc.
:)
**Describe the solution you'd like**
Something similar to the [`tr` operator in Perl](https://perldoc.perl.org/perlop#tr/SEARCHLIST/REPLACEMENTLIST/cdsr):
a single pass through the string with all replaces applied.
```
Select transliterate(dna, 'ACGTacgt', 'TGCAtgca') AS rna
-- if the first arg has characters listed in the second arg, they are replaced by the corresponding chars from the third arg
-- other characters are passed as is
```
Or a more expressive (but noisier) format with a map:
```
Select transliterate(dna,{ 'A' :'T', 'C' :'G',...}) AS rna
```
**Describe alternatives you've considered**
A sequence of replace calls: when the alphabet is long (like base64) it looks ugly, and it creates a lot of copies of the string, so it is slow.
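To make the pain point concrete: chained `replaceAll` calls cannot even express a character swap correctly, because a later replace clobbers the result of an earlier one:
```
SELECT replaceAll(replaceAll('AT', 'A', 'T'), 'T', 'A')
-- returns 'AA', not the intended 'TA'
```
So on top of the extra string copies, you also need intermediate placeholder characters to get swaps right.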
| https://github.com/ClickHouse/ClickHouse/issues/38833 | https://github.com/ClickHouse/ClickHouse/pull/38935 | 812143c76ba0ef50041f16c98c5c5c05c6fd4292 | cfe7413678e1cbaf7e4af8e150e578b7b536db6e | "2022-07-04T21:15:56Z" | c++ | "2022-07-14T00:31:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,784 | ["docs/en/sql-reference/functions/other-functions.md", "src/Functions/parseTimeDelta.cpp", "src/Functions/registerFunctionsFormatting.cpp", "tests/fuzz/all.dict", "tests/fuzz/dictionaries/functions.dict", "tests/queries/0_stateless/00534_filimonov.data", "tests/queries/0_stateless/02354_parse_timedelta.reference", "tests/queries/0_stateless/02354_parse_timedelta.sql"] | `parseTimeDelta` function | Opposite for `formatReadableTimeDelta`.
**Use case**
Parsing text inputs.
**Describe the solution you'd like**
It should parse a sequence of numbers followed by something resembling a time unit.
Examples:
```
1 min 35 sec
0m;11.23s.
11hr 25min 3.1s
0.00123 seconds
1yr2mo
11s+22min
```
Should return a floating-point number with the number of seconds.
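To make the contract concrete, some hypothetical calls with the results I would expect (the function does not exist yet):
```
parseTimeDelta('1 min 35 sec') -> 95
parseTimeDelta('0.00123 seconds') -> 0.00123
parseTimeDelta('11hr 25min 3.1s') -> 41103.1
```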
(Subsequently, we can extend it by adding an optional argument describing the desired unit of result).
It is allowed to treat units like days and months as somewhat arbitrary lengths (the day will be always 86400 seconds regardless of DST and month will be e.g. 30.5 days, similarly to what `formatReadableTimeDelta` is doing).
The function will be named `parseTimeDelta`, not `parseReadableTimeDelta`, because it can be extended to parse something unreadable. | https://github.com/ClickHouse/ClickHouse/issues/38784 | https://github.com/ClickHouse/ClickHouse/pull/39071 | acb1fa53ed06a922143134c3c80f6d90e8e9f6b4 | 12221cffc9800de647080071dfc40d12fc069b4d | "2022-07-04T01:42:42Z" | c++ | "2022-07-19T10:39:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,772 | ["src/Interpreters/InterpreterSelectQuery.cpp", "src/Interpreters/RewriteOrderByVisitor.cpp", "src/Interpreters/RewriteOrderByVisitor.hpp", "src/Interpreters/TreeRewriter.cpp", "tests/performance/order_by_read_in_order.xml", "tests/queries/0_stateless/02353_order_by_tuple.reference", "tests/queries/0_stateless/02353_order_by_tuple.sh"] | ORDER BY with columns in braces | Performance of ORDER BY on sorting key prefix with braces is 2x slower than without due to inefficient execution plan.
Without braces:
```sql
EXPLAIN PIPELINE
SELECT
CounterID,
EventDate
FROM hits_v1
ORDER BY
CounterID ASC,
EventDate ASC
ββexplainββββββββββββββββββββββββββββ
β (Expression) β
β ExpressionTransform β
β (Sorting) β
β MergingSortedTransform 19 β 1 β
β (Expression) β
β ExpressionTransform Γ 19 β
β (ReadFromMergeTree) β
β MergeTreeInOrder Γ 19 0 β 1 β
βββββββββββββββββββββββββββββββββββββ
```
With braces:
```sql
EXPLAIN PIPELINE
SELECT
CounterID,
EventDate
FROM hits_v1
ORDER BY (CounterID, EventDate) ASC
ββexplainβββββββββββββββββββββββββββββββββ
β (Expression) β
β ExpressionTransform β
β (Sorting) β
β MergingSortedTransform 16 β 1 β
β MergeSortingTransform Γ 16 β
β LimitsCheckingTransform Γ 16 β
β PartialSortingTransform Γ 16 β
β (Expression) β
β ExpressionTransform Γ 16 β
β (ReadFromMergeTree) β
β MergeTreeThread Γ 16 0 β 1 β
ββββββββββββββββββββββββββββββββββββββββββ
``` | https://github.com/ClickHouse/ClickHouse/issues/38772 | https://github.com/ClickHouse/ClickHouse/pull/38873 | 8ffd6cd22cee62790d70fa60537b771fa43daa06 | fac42db3800a9396959609206da8144de747d948 | "2022-07-03T21:50:27Z" | c++ | "2022-07-07T17:16:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,756 | ["src/Client/ClientBase.cpp", "src/Common/EventRateMeter.h", "src/Common/ProgressIndication.cpp", "src/Common/ProgressIndication.h"] | CPU metric in clickhouse-client is wrong | CPU usage is misreported.
Currently "0 CPU" means "no CPU metric received in the previous ProfileEvents packet". But there can be many ProfileEvents packets and only a few with CPU increment.
That's why the CPU metric shows something weird like:
```
0
0
0
0
0
1234
0
0
0
...
```
This is unbearable, and the CPU metric should be either fixed or removed altogether. | https://github.com/ClickHouse/ClickHouse/issues/38756 | https://github.com/ClickHouse/ClickHouse/pull/39280 | 7146685f9d5a91207c103922288ffe5bb8f6bb94 | 499818751ef56e7b53bd36ad33fea884b7eb6cea | "2022-07-03T17:27:04Z" | c++ | "2022-07-19T13:17:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,741 | ["src/Interpreters/ExpressionAnalyzer.cpp", "tests/queries/0_stateless/02366_window_function_order_by.reference", "tests/queries/0_stateless/02366_window_function_order_by.sql"] | Not found column value in block, with WF over tuple | ```sql
SELECT groupArray(tuple(value)) OVER ()
FROM (select number value from numbers(10))
ORDER BY value ASC;
Received exception from server (version 22.7.1):
Code: 10. DB::Exception: Received from localhost:9000.
DB::Exception: Not found column value in block.
There are only columns: groupArray(tuple(value)) OVER (), tuple(value). (NOT_FOUND_COLUMN_IN_BLOCK)
```
Without tuple it is OK:
```sql
SELECT groupArray(value) OVER ()
FROM (select number value from numbers(10))
ORDER BY value ASC;
ββgroupArray(value) OVER ()ββ
β [0,1,2,3,4,5,6,7,8,9] β
β [0,1,2,3,4,5,6,7,8,9] β
β [0,1,2,3,4,5,6,7,8,9] β
β [0,1,2,3,4,5,6,7,8,9] β
β [0,1,2,3,4,5,6,7,8,9] β
β [0,1,2,3,4,5,6,7,8,9] β
β [0,1,2,3,4,5,6,7,8,9] β
β [0,1,2,3,4,5,6,7,8,9] β
β [0,1,2,3,4,5,6,7,8,9] β
β [0,1,2,3,4,5,6,7,8,9] β
βββββββββββββββββββββββββββββ
```
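Interestingly, selecting the raw column alongside the window function avoids the error as well, so it can serve as a workaround: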
```sql
SELECT value, groupArray(tuple(value)) OVER ()
FROM (select number value from numbers(10))
ORDER BY value ASC;
ββvalueββ¬βgroupArray(tuple(value)) OVER ()βββββββββββ
β 0 β [(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)] β
β 1 β [(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)] β
β 2 β [(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)] β
β 3 β [(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)] β
β 4 β [(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)] β
β 5 β [(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)] β
β 6 β [(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)] β
β 7 β [(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)] β
β 8 β [(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)] β
β 9 β [(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)] β
βββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββ
``` | https://github.com/ClickHouse/ClickHouse/issues/38741 | https://github.com/ClickHouse/ClickHouse/pull/39354 | 6a09340f1159f3e9a96dc4c01de4e7607b2173c6 | b01f5bdca8ed6e71b551234d94b861be299ceabc | "2022-07-02T20:44:49Z" | c++ | "2022-09-19T06:22:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,729 | ["tests/queries/0_stateless/02524_fuzz_and_fuss_2.reference", "tests/queries/0_stateless/02524_fuzz_and_fuss_2.sql"] | Block structure mismatch in Pipes stream: different types: | ```
CREATE TABLE default.data_a_02187
(
`a` Nullable(Int64)
)
ENGINE = Memory
INSERT INTO data_a_02187
SELECT *
FROM system.one
SETTINGS max_block_size = '1', min_insert_block_size_rows = '65536', min_insert_block_size_bytes = '0', max_insert_threads = '0', max_threads = '3', receive_timeout = '10', receive_data_timeout_ms = '10000', connections_with_failover_max_tries = '0', extremes = '1', use_uncompressed_cache = '0', optimize_move_to_prewhere = '1', optimize_move_to_prewhere_if_final = '0', replication_alter_partitions_sync = '2', totals_mode = 'before_having', allow_suspicious_low_cardinality_types = '1', compile_expressions = '1', min_count_to_compile_expression = '0', group_by_two_level_threshold = '100', distributed_aggregation_memory_efficient = '0', distributed_group_by_no_merge = '1', optimize_distributed_group_by_sharding_key = '1', optimize_skip_unused_shards = '1', optimize_skip_unused_shards_rewrite_in = '1', force_optimize_skip_unused_shards = '2', optimize_skip_unused_shards_nesting = '1', force_optimize_skip_unused_shards_nesting = '2', merge_tree_min_rows_for_concurrent_read = '10000', force_primary_key = '1', network_compression_method = 'ZSTD', network_zstd_compression_level = '7', log_queries = '0', log_queries_min_type = 'QUERY_FINISH', distributed_product_mode = 'local', insert_quorum = '2', insert_quorum_timeout = '0', insert_quorum_parallel = '0', select_sequential_consistency = '1', join_use_nulls = '1', any_join_distinct_right_table_keys = '1', preferred_max_column_in_block_size_bytes = '32', insert_distributed_sync = '1', insert_allow_materialized_columns = '1', use_index_for_in_with_subqueries = '1', joined_subquery_requires_alias = '0', empty_result_for_aggregation_by_empty_set = '1', allow_suspicious_codecs = '1', query_profiler_real_time_period_ns = '0', query_profiler_cpu_time_period_ns = '0', opentelemetry_start_trace_probability = '1', max_rows_to_read = '1000000', read_overflow_mode = 'break', max_rows_to_group_by = '10', group_by_overflow_mode = 'any', max_rows_to_sort = '100', sort_overflow_mode = 'break', max_result_rows = '10', max_execution_time = '3', max_execution_speed = '1', max_bytes_in_join = '100', join_algorithm = 'partial_merge', max_memory_usage = '1099511627776', log_query_threads = '1', send_logs_level = 'fatal', enable_optimize_predicate_expression = '1', prefer_localhost_replica = '1', optimize_read_in_order = '1', optimize_aggregation_in_order = '1', read_in_order_two_level_merge_threshold = '1', allow_introspection_functions = '1', check_query_single_value_result = '1', allow_experimental_live_view = '1', default_table_engine = 'Memory', mutations_sync = '2', convert_query_to_cnf = '0', optimize_arithmetic_operations_in_aggregate_functions = '1', optimize_duplicate_order_by_and_distinct = '0', optimize_multiif_to_if = '0', optimize_monotonous_functions_in_order_by = '1', optimize_functions_to_subcolumns = '1', optimize_using_constraints = '1', optimize_substitute_columns = '1', optimize_append_index = '1', transform_null_in = '1', allow_experimental_geo_types = '1', data_type_default_nullable = '1', cast_keep_nullable = '1', cast_ipv4_ipv6_default_on_conversion_error = '0', system_events_show_zero_values = '1', enable_global_with_statement = '1', optimize_on_insert = '0', optimize_rewrite_sum_if_to_count_if = '1', distributed_ddl_output_mode = 'throw', union_default_mode = 'ALL', optimize_aggregators_of_group_by_keys = '1', optimize_group_by_function_keys = '1', short_circuit_function_evaluation = 'enable', async_insert = '1', enable_filesystem_cache = '0', allow_deprecated_database_ordinary = '1', allow_deprecated_syntax_for_merge_tree = '1', 
allow_experimental_nlp_functions = '1', allow_experimental_object_type = '1', allow_experimental_map_type = '1', allow_experimental_projection_optimization = '1', input_format_null_as_default = '1', input_format_ipv4_default_on_conversion_error = '0', input_format_ipv6_default_on_conversion_error = '0', output_format_json_named_tuples_as_objects = '1', output_format_write_statistics = '0', output_format_pretty_row_numbers = '1'
```
```
2022.07.02 10:42:05.507212 [ 4114448 ] {19486459-764c-4c71-a015-178ed524cbe5} <Debug> executeQuery: (from 127.0.0.1:51604) INSERT INTO data_a_02187 SELECT * FROM system.one (stage: Complete)
2022.07.02 10:42:05.507826 [ 4114448 ] {19486459-764c-4c71-a015-178ed524cbe5} <Trace> ContextAccess (default): Access granted: INSERT(a) ON default.data_a_02187
2022.07.02 10:42:05.509868 [ 4114448 ] {19486459-764c-4c71-a015-178ed524cbe5} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON system.one
2022.07.02 10:42:05.510473 [ 4114448 ] {19486459-764c-4c71-a015-178ed524cbe5} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2022.07.02 10:42:05.514967 [ 4114448 ] {19486459-764c-4c71-a015-178ed524cbe5} <Trace> ContextAccess (default): Access granted: SELECT(a) ON default.data_a_02187
2022.07.02 10:42:05.518543 [ 4114448 ] {19486459-764c-4c71-a015-178ed524cbe5} <Trace> ContextAccess (default): Access granted: SELECT(a) ON default.data_a_02187
2022.07.02 10:42:05.521756 [ 4114448 ] {19486459-764c-4c71-a015-178ed524cbe5} <Trace> ContextAccess (default): Access granted: SELECT(a) ON default.data_a_02187
2022.07.02 10:42:05.524943 [ 4114448 ] {19486459-764c-4c71-a015-178ed524cbe5} <Trace> ContextAccess (default): Access granted: SELECT(a) ON default.data_a_02187
2022.07.02 10:42:05.528147 [ 4114448 ] {19486459-764c-4c71-a015-178ed524cbe5} <Trace> ContextAccess (default): Access granted: SELECT(a) ON default.data_a_02187
2022.07.02 10:42:05.543125 [ 4110340 ] {19486459-764c-4c71-a015-178ed524cbe5} <Trace> ContextAccess (default): Access granted: SELECT(a) ON default.data_a_02187
2022.07.02 10:42:05.543744 [ 4110340 ] {19486459-764c-4c71-a015-178ed524cbe5} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2022.07.02 10:42:05.548398 [ 4110340 ] {19486459-764c-4c71-a015-178ed524cbe5} <Fatal> : Logical error: 'Block structure mismatch in Pipes stream: different types:
a Nullable(Int64) Nullable(size = 0, Int64(size = 0), UInt8(size = 0))
a UInt8 UInt8(size = 0)'.
2022.07.02 10:42:05.550510 [ 4114616 ] {} <Fatal> BaseDaemon: ########################################
2022.07.02 10:42:05.550847 [ 4114616 ] {} <Fatal> BaseDaemon: (version 22.7.1.1, build id: F83784D0D967DB9C) (from thread 4110340) (query_id: 19486459-764c-4c71-a015-178ed524cbe5) (query: INSERT INTO data_a_02187 SELECT * FROM system.one) Received signal Aborted (6)
2022.07.02 10:42:05.551125 [ 4114616 ] {} <Fatal> BaseDaemon:
2022.07.02 10:42:05.551440 [ 4114616 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f048d55203b 0x7f048d531859 0x16e8cce6 0x16e8cd75 0x16e8ce25 0x250f2f31 0x250ec46a 0x250f1106 0x250f0dc0 0x2539048b 0x2538f78a 0x253c4ff8 0x27e20cad 0x27e207a5 0x27d23824 0x27d237b5 0x27d2377d 0x27d23755 0x27d2371d 0x16ef3a66 0x16ef0015 0x27d22eee 0x27d22a07 0x2795fea3 0x2795fadb 0x27943081 0x279433b7 0x27942532 0x27941ad8 0x27940911 0x279407f6 0x27940795 0x27940741 0x27940652 0x2794053b 0x27940415 0x279403dd 0x279403b5 0x27940380 0x16ef3a66 0x16ef0015
2022.07.02 10:42:05.551748 [ 4114616 ] {} <Fatal> BaseDaemon: 4. gsignal @ 0x7f048d55203b in ?
2022.07.02 10:42:05.551935 [ 4114616 ] {} <Fatal> BaseDaemon: 5. abort @ 0x7f048d531859 in ?
2022.07.02 10:42:05.661918 [ 4114616 ] {} <Fatal> BaseDaemon: 6. /build/build_docker/../src/Common/Exception.cpp:40: DB::abortOnFailedAssertion(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x16e8cce6 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:05.771959 [ 4114616 ] {} <Fatal> BaseDaemon: 7. /build/build_docker/../src/Common/Exception.cpp:63: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*> > const&) @ 0x16e8cd75 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:05.864909 [ 4114616 ] {} <Fatal> BaseDaemon: 8. /build/build_docker/../src/Common/Exception.cpp:70: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x16e8ce25 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:06.097172 [ 4114616 ] {} <Fatal> BaseDaemon: 9. /build/build_docker/../src/Core/Block.cpp:35: void DB::onError<void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x250f2f31 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:06.319279 [ 4114616 ] {} <Fatal> BaseDaemon: 10. /build/build_docker/../src/Core/Block.cpp:51: void DB::checkColumnStructure<void>(DB::ColumnWithTypeAndName const&, DB::ColumnWithTypeAndName const&, std::__1::basic_string_view<char, std::__1::char_traits<char> >, bool, int) @ 0x250ec46a in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:06.542026 [ 4114616 ] {} <Fatal> BaseDaemon: 11. /build/build_docker/../src/Core/Block.cpp:98: void DB::checkBlockStructure<void>(DB::Block const&, DB::Block const&, std::__1::basic_string_view<char, std::__1::char_traits<char> >, bool) @ 0x250f1106 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:06.766111 [ 4114616 ] {} <Fatal> BaseDaemon: 12. /build/build_docker/../src/Core/Block.cpp:665: DB::assertBlocksHaveEqualStructure(DB::Block const&, DB::Block const&, std::__1::basic_string_view<char, std::__1::char_traits<char> >) @ 0x250f0dc0 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:06.982917 [ 4114616 ] {} <Fatal> BaseDaemon: 13. /build/build_docker/../src/QueryPipeline/Pipe.cpp:502: DB::Pipe::addTransform(std::__1::shared_ptr<DB::IProcessor>, DB::OutputPort*, DB::OutputPort*) @ 0x2539048b in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:07.202687 [ 4114616 ] {} <Fatal> BaseDaemon: 14. /build/build_docker/../src/QueryPipeline/Pipe.cpp:426: DB::Pipe::addTransform(std::__1::shared_ptr<DB::IProcessor>) @ 0x2538f78a in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:07.368829 [ 4114616 ] {} <Fatal> BaseDaemon: 15. /build/build_docker/../src/QueryPipeline/QueryPipelineBuilder.cpp:131: DB::QueryPipelineBuilder::addTransform(std::__1::shared_ptr<DB::IProcessor>) @ 0x253c4ff8 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:07.682884 [ 4114616 ] {} <Fatal> BaseDaemon: 16. /build/build_docker/../src/Processors/Transforms/buildPushingToViewsChain.cpp:447: DB::process(DB::Block, DB::ViewRuntimeData&, DB::ViewsData const&) @ 0x27e20cad in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:07.956874 [ 4114616 ] {} <Fatal> BaseDaemon: 17. /build/build_docker/../src/Processors/Transforms/buildPushingToViewsChain.cpp:552: DB::ExecutingInnerQueryFromViewTransform::onConsume(DB::Chunk) @ 0x27e207a5 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:08.121928 [ 4114616 ] {} <Fatal> BaseDaemon: 18. /build/build_docker/../src/Processors/Transforms/ExceptionKeepingTransform.cpp:151: DB::ExceptionKeepingTransform::work()::$_1::operator()() const @ 0x27d23824 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:08.283033 [ 4114616 ] {} <Fatal> BaseDaemon: 19. /build/build_docker/../contrib/libcxx/include/type_traits:3640: decltype(static_cast<DB::ExceptionKeepingTransform::work()::$_1&>(fp)()) std::__1::__invoke<DB::ExceptionKeepingTransform::work()::$_1&>(DB::ExceptionKeepingTransform::work()::$_1&) @ 0x27d237b5 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:08.440691 [ 4114616 ] {} <Fatal> BaseDaemon: 20. /build/build_docker/../contrib/libcxx/include/__functional/invoke.h:62: void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::ExceptionKeepingTransform::work()::$_1&>(DB::ExceptionKeepingTransform::work()::$_1&) @ 0x27d2377d in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:08.595266 [ 4114616 ] {} <Fatal> BaseDaemon: 21. /build/build_docker/../contrib/libcxx/include/__functional/function.h:230: std::__1::__function::__default_alloc_func<DB::ExceptionKeepingTransform::work()::$_1, void ()>::operator()() @ 0x27d23755 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:08.749953 [ 4114616 ] {} <Fatal> BaseDaemon: 22. /build/build_docker/../contrib/libcxx/include/__functional/function.h:711: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::ExceptionKeepingTransform::work()::$_1, void ()> >(std::__1::__function::__policy_storage const*) @ 0x27d2371d in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:08.820121 [ 4114616 ] {} <Fatal> BaseDaemon: 23. /build/build_docker/../contrib/libcxx/include/__functional/function.h:843: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x16ef3a66 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:08.883311 [ 4114616 ] {} <Fatal> BaseDaemon: 24. /build/build_docker/../contrib/libcxx/include/__functional/function.h:1184: std::__1::function<void ()>::operator()() const @ 0x16ef0015 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:09.046586 [ 4114616 ] {} <Fatal> BaseDaemon: 25. /build/build_docker/../src/Processors/Transforms/ExceptionKeepingTransform.cpp:115: DB::runStep(std::__1::function<void ()>, DB::ThreadStatus*, std::__1::atomic<unsigned long>*) @ 0x27d22eee in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:09.174018 [ 4114616 ] {} <Fatal> BaseDaemon: 26. /build/build_docker/../src/Processors/Transforms/ExceptionKeepingTransform.cpp:151: DB::ExceptionKeepingTransform::work() @ 0x27d22a07 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:09.272695 [ 4114616 ] {} <Fatal> BaseDaemon: 27. /build/build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:47: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) @ 0x2795fea3 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:09.366491 [ 4114616 ] {} <Fatal> BaseDaemon: 28. /build/build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:86: DB::ExecutionThreadContext::executeTask() @ 0x2795fadb in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:09.531262 [ 4114616 ] {} <Fatal> BaseDaemon: 29. /build/build_docker/../src/Processors/Executors/PipelineExecutor.cpp:222: DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x27943081 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:09.698793 [ 4114616 ] {} <Fatal> BaseDaemon: 30. /build/build_docker/../src/Processors/Executors/PipelineExecutor.cpp:187: DB::PipelineExecutor::executeSingleThread(unsigned long) @ 0x279433b7 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:09.841002 [ 4114616 ] {} <Fatal> BaseDaemon: 31. /build/build_docker/../src/Processors/Executors/PipelineExecutor.cpp:331: DB::PipelineExecutor::executeImpl(unsigned long) @ 0x27942532 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:10.009226 [ 4114616 ] {} <Fatal> BaseDaemon: 32. /build/build_docker/../src/Processors/Executors/PipelineExecutor.cpp:88: DB::PipelineExecutor::execute(unsigned long) @ 0x27941ad8 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:10.113891 [ 4114616 ] {} <Fatal> BaseDaemon: 33. /build/build_docker/../src/Processors/Executors/CompletedPipelineExecutor.cpp:43: DB::threadFunction(DB::CompletedPipelineExecutor::Data&, std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0x27940911 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:10.214769 [ 4114616 ] {} <Fatal> BaseDaemon: 34. /build/build_docker/../src/Processors/Executors/CompletedPipelineExecutor.cpp:78: DB::CompletedPipelineExecutor::execute()::$_0::operator()() const @ 0x279407f6 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:10.315765 [ 4114616 ] {} <Fatal> BaseDaemon: 35. /build/build_docker/../contrib/libcxx/include/type_traits:3648: decltype(static_cast<DB::CompletedPipelineExecutor::execute()::$_0&>(fp)()) std::__1::__invoke_constexpr<DB::CompletedPipelineExecutor::execute()::$_0&>(DB::CompletedPipelineExecutor::execute()::$_0&) @ 0x27940795 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:10.416778 [ 4114616 ] {} <Fatal> BaseDaemon: 36. /build/build_docker/../contrib/libcxx/include/tuple:1595: decltype(auto) std::__1::__apply_tuple_impl<DB::CompletedPipelineExecutor::execute()::$_0&, std::__1::tuple<>&>(DB::CompletedPipelineExecutor::execute()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) @ 0x27940741 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:10.518627 [ 4114616 ] {} <Fatal> BaseDaemon: 37. /build/build_docker/../contrib/libcxx/include/tuple:1604: decltype(auto) std::__1::apply<DB::CompletedPipelineExecutor::execute()::$_0&, std::__1::tuple<>&>(DB::CompletedPipelineExecutor::execute()::$_0&, std::__1::tuple<>&) @ 0x27940652 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:10.607312 [ 4114616 ] {} <Fatal> BaseDaemon: 38. /build/build_docker/../src/Common/ThreadPool.h:188: ThreadFromGlobalPool::ThreadFromGlobalPool<DB::CompletedPipelineExecutor::execute()::$_0>(DB::CompletedPipelineExecutor::execute()::$_0&&)::'lambda'()::operator()() @ 0x2794053b in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:10.707866 [ 4114616 ] {} <Fatal> BaseDaemon: 39. /build/build_docker/../contrib/libcxx/include/type_traits:3640: decltype(static_cast<DB::CompletedPipelineExecutor::execute()::$_0>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::CompletedPipelineExecutor::execute()::$_0>(DB::CompletedPipelineExecutor::execute()::$_0&&)::'lambda'()&>(DB::CompletedPipelineExecutor::execute()::$_0&&) @ 0x27940415 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:10.807898 [ 4114616 ] {} <Fatal> BaseDaemon: 40. /build/build_docker/../contrib/libcxx/include/__functional/invoke.h:62: void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::CompletedPipelineExecutor::execute()::$_0>(DB::CompletedPipelineExecutor::execute()::$_0&&)::'lambda'()&>(ThreadFromGlobalPool::ThreadFromGlobalPool<DB::CompletedPipelineExecutor::execute()::$_0>(DB::CompletedPipelineExecutor::execute()::$_0&&)::'lambda'()&) @ 0x279403dd in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:10.907973 [ 4114616 ] {} <Fatal> BaseDaemon: 41. /build/build_docker/../contrib/libcxx/include/__functional/function.h:230: std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::CompletedPipelineExecutor::execute()::$_0>(DB::CompletedPipelineExecutor::execute()::$_0&&)::'lambda'(), void ()>::operator()() @ 0x279403b5 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:11.015065 [ 4114616 ] {} <Fatal> BaseDaemon: 42. /build/build_docker/../contrib/libcxx/include/__functional/function.h:711: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::CompletedPipelineExecutor::execute()::$_0>(DB::CompletedPipelineExecutor::execute()::$_0&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x27940380 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:11.096869 [ 4114616 ] {} <Fatal> BaseDaemon: 43. /build/build_docker/../contrib/libcxx/include/__functional/function.h:843: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x16ef3a66 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:11.160153 [ 4114616 ] {} <Fatal> BaseDaemon: 44. /build/build_docker/../contrib/libcxx/include/__functional/function.h:1184: std::__1::function<void ()>::operator()() const @ 0x16ef0015 in /home/ubuntu/debug-ch/clickhouse
2022.07.02 10:42:12.396001 [ 4114616 ] {} <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read. (calculated checksum: 393D0FA4D713737C40FCCA46C263AAF4)
``` | https://github.com/ClickHouse/ClickHouse/issues/38729 | https://github.com/ClickHouse/ClickHouse/pull/45035 | 2254855f09256708fbec70ce545ad7e7e1f4aeac | 0eaf05a2dd246fc6fee4e58ba78d3e9f51dac40c | "2022-07-02T11:08:25Z" | c++ | "2023-01-08T07:22:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38671 | ["src/Processors/Transforms/buildPushingToViewsChain.cpp", "tests/integration/test_postgresql_replica_database_engine_2/test.py"] | Replication from PostgreSQL to CH with MaterializedPostgreSQL database engine stopped after materialized view created | When a materialized view is created on a ClickHouse table inside a MaterializedPostgreSQL database, the replication process stops updating and inserting data into that table. After the MV is removed, replication resumes, but the rows written in the meantime are lost.
ClickHouse version: 22.6.1.1399
Postgres version: PostgreSQL 12.11 (Ubuntu 12.11-0ubuntu0.20.04.1)
**Postgres:**
```
CREATE TABLE test.test_table (
id int4 NOT NULL,
val int4 NOT NULL,
CONSTRAINT tt_pk PRIMARY KEY (id)
);
INSERT INTO test.test_table
(id, val)
VALUES(1, 1);
INSERT INTO test.test_table
(id, val)
VALUES(2, 2);
SELECT * FROM test.test_table;
id|val|
--+---+
1| 1|
2| 2|
```
**ClickHouse:**
```
CREATE DATABASE mpsql_test
ENGINE = MaterializedPostgreSQL
(
'localhost:5432', 'test_db', 'postgres', 'postgres'
)
SETTINGS
materialized_postgresql_schema = 'test',
materialized_postgresql_tables_list = 'test_table';
SHOW TABLES FROM mpsql_test;
name |
----------+
test_table|
SELECT * FROM mpsql_test.test_table;
id|val|
--+---+
1| 1|
2| 2|
```
**Postgres:**
```
INSERT INTO test.test_table
(id, val)
VALUES(3, 3);
SELECT * FROM test.test_table;
id|val|
--+---+
1| 1|
2| 2|
3| 3|
```
**ClickHouse:**
```
SELECT * FROM mpsql_test.test_table;
id|val|
--+---+
1| 1|
2| 2|
3| 3|
CREATE DATABASE test_db;
CREATE MATERIALIZED VIEW test_db.test_view
ENGINE = MergeTree()
ORDER BY (id)
AS SELECT * from mpsql_test.test_table;
SELECT * FROM test_db.test_view;
id|val|
--+---+
```
**Postgres:**
```
INSERT INTO test.test_table
(id, val)
VALUES(4, 4);
INSERT INTO test.test_table
(id, val)
VALUES(5, 5);
SELECT * FROM test.test_table;
id|val|
--+---+
1| 1|
2| 2|
3| 3|
4| 4|
5| 5|
```
**ClickHouse:**
```
SELECT * FROM mpsql_test.test_table;
id|val|
--+---+
1| 1|
2| 2|
3| 3|
DROP VIEW test_db.test_view;
SELECT * FROM mpsql_test.test_table;
id|val|
--+---+
1| 1|
2| 2|
3| 3|
```
**Postgres:**
```
INSERT INTO test.test_table
(id, val)
VALUES(6, 6);
INSERT INTO test.test_table
(id, val)
VALUES(7, 7);
SELECT * FROM test.test_table;
id|val|
--+---+
1| 1|
2| 2|
3| 3|
4| 4|
5| 5|
6| 6|
7| 7|
```
**ClickHouse:**
```
SELECT * FROM mpsql_test.test_table;
id|val|
--+---+
1| 1|
2| 2|
3| 3|
6| 6|
7| 7|
``` | https://github.com/ClickHouse/ClickHouse/issues/38671 | https://github.com/ClickHouse/ClickHouse/pull/40807 | 2c0395b346e49f3426324d899b84c8ceba25f407 | fc619d06ba76fa20ea741c599aea597bcdbbaa2a | "2022-07-01T08:12:56Z" | c++ | "2023-02-28T14:10:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38666 | ["tests/queries/0_stateless/02786_transform_float.reference", "tests/queries/0_stateless/02786_transform_float.sql", "tests/queries/0_stateless/02787_transform_null.reference", "tests/queries/0_stateless/02787_transform_null.sql"] | CASE operator works incorrectly | ClickHouse version 22.3.3.44
When I use the CASE operator with an explicit ELSE branch, it works as expected,
but if the ELSE branch is absent, the operator always returns NULL, even when a WHEN branch matches.
```
select
case 1
when 1 then 'a'
else 'b'
end value
```
```
value
-----
a
```
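A possible workaround until this is fixed is to rewrite the CASE as `multiIf` with an explicit NULL default (my suggestion, not part of the original report, and untested on 22.3):
```sql
-- Equivalent to CASE 1 WHEN 1 THEN 'a' END under standard SQL semantics:
-- NULL should be returned only when no branch matches.
select multiIf(1 = 1, 'a', NULL) as value
```
The failing ELSE-less variant is shown next.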
```
select
case 1
when 1 then 'a'
end value
```
```
value
-----
<NULL>
``` | https://github.com/ClickHouse/ClickHouse/issues/38666 | https://github.com/ClickHouse/ClickHouse/pull/50833 | 572c9dec4e81dc323e98e0d87eaafcf8c863d843 | 826dfe4467a63187fa897e841659ec2a7ff3de5e | "2022-07-01T04:06:57Z" | c++ | "2023-06-10T17:07:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38629 | ["docker/test/stress/run.sh", "src/Storages/MergeTree/DataPartStorageOnDisk.cpp", "tests/queries/0_stateless/01701_clear_projection_and_part_remove.reference", "tests/queries/0_stateless/01701_clear_projection_and_part_remove.sql"] | BC check: Cannot quickly remove directory ... by removing files | https://s3.amazonaws.com/clickhouse-test-reports/0/bd4a208428c8f17c4c3148886e6bf008bb064f58/stress_test__memory__actions_.html
```
2022.06.30 05:04:05.434426 [ 461303 ] {} <Error> test_35.tp_1 (1ff2034b-af50-401f-90fd-ac714bb9d202): Cannot quickly remove directory /var/lib/clickhouse/store/1ff/1ff2034b-af50-401f-90fd-ac714bb9d202/delete_tmp_all_0_0_0_2 by removing files; fallback to recursive removal. Reason: Code: 458. DB::ErrnoException: Cannot unlink file /var/lib/clickhouse/store/1ff/1ff2034b-af50-401f-90fd-ac714bb9d202/delete_tmp_all_0_0_0_2/pp.proj, errno: 21, strerror: Is a directory. (CANNOT_UNLINK) (version 22.7.1.1 (official build))
2022.06.30 05:04:05.434439 [ 461316 ] {} <Error> test_35.tp_2 (2c15d3ff-aa27-4370-b90d-fa14236c603b): Cannot quickly remove directory /var/lib/clickhouse/store/2c1/2c15d3ff-aa27-4370-b90d-fa14236c603b/delete_tmp_all_0_0_0_2 by removing files; fallback to recursive removal. Reason: Code: 458. DB::ErrnoException: Cannot unlink file /var/lib/clickhouse/store/2c1/2c15d3ff-aa27-4370-b90d-fa14236c603b/delete_tmp_all_0_0_0_2/pp.proj, errno: 21, strerror: Is a directory. (CANNOT_UNLINK) (version 22.7.1.1 (official build))
2022.06.30 05:04:05.436398 [ 461303 ] {} <Error> test_35.tp_1 (1ff2034b-af50-401f-90fd-ac714bb9d202): Cannot quickly remove directory /var/lib/clickhouse/store/1ff/1ff2034b-af50-401f-90fd-ac714bb9d202/delete_tmp_all_1_1_0_2 by removing files; fallback to recursive removal. Reason: Code: 458. DB::ErrnoException: Cannot unlink file /var/lib/clickhouse/store/1ff/1ff2034b-af50-401f-90fd-ac714bb9d202/delete_tmp_all_1_1_0_2/pp.proj, errno: 21, strerror: Is a directory. (CANNOT_UNLINK) (version 22.7.1.1 (official build))
2022.06.30 05:04:05.436486 [ 461316 ] {} <Error> test_35.tp_2 (2c15d3ff-aa27-4370-b90d-fa14236c603b): Cannot quickly remove directory /var/lib/clickhouse/store/2c1/2c15d3ff-aa27-4370-b90d-fa14236c603b/delete_tmp_all_1_1_0_2 by removing files; fallback to recursive removal. Reason: Code: 458. DB::ErrnoException: Cannot unlink file /var/lib/clickhouse/store/2c1/2c15d3ff-aa27-4370-b90d-fa14236c603b/delete_tmp_all_1_1_0_2/pp.proj, errno: 21, strerror: Is a directory. (CANNOT_UNLINK) (version 22.7.1.1 (official build))
```
Seems like the issue was introduced in #36555 | https://github.com/ClickHouse/ClickHouse/issues/38629 | https://github.com/ClickHouse/ClickHouse/pull/39119 | 69de9ee0e83ecfad68465b2cfbab7e03aaf73c21 | 8efbe6d44d049fa200c12c045f617b7a2ef1b9ac | "2022-06-30T13:32:36Z" | c++ | "2022-07-12T14:24:30Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38618 | ["src/Access/ContextAccess.cpp", "tests/integration/test_access_control_on_cluster/test.py"] | GRANT/REVOKE with ON CLUSTER causes Segmentation fault | When you run GRANT with ON CLUSTER, every server in the cluster will crash.
Version 22.6.2.12.
I used the official docker image to create a small cluster. You can see the config here and try it out with docker-compose: https://github.com/maybedino/test-clickhouse
Then log in with the default user and try to add another user:
```sql
create user testuser on cluster testcluster identified with sha256_password by 'test'
```
```sql
grant on cluster testcluster all on *.* to testuser with grant option
```
GRANT will crash every server in the cluster:
```
2022.06.30 08:18:38.865561 [ 273 ] {} <Fatal> BaseDaemon: ########################################
2022.06.30 08:18:38.865712 [ 273 ] {} <Fatal> BaseDaemon: (version 22.6.2.12 (official build), build id: 52AFD84A0FEDD1BA) (from thread 262) (query_id: 6469e16f-b37f-4565-a9f7-e8e023c1632e) (query: /* ddl_entry=query-0000000001 */ GRANT ALL ON *.* TO testuser WITH GRANT OPTION) Received signal Segmentation fault (11)
2022.06.30 08:18:38.865769 [ 273 ] {} <Fatal> BaseDaemon: Address: 0x16b Access: read. Address not mapped to object.
2022.06.30 08:18:38.865809 [ 273 ] {} <Fatal> BaseDaemon: Stack trace: 0x1541b8dd 0x1659fa4b 0x16566556 0x1656a5c4 0x15cdcb49 0x15cdb557 0x15cd94d3 0x15cd3405 0x15ce6ff6 0xb94d077 0xb95049d 0x7f0096c6a609 0x7f0096b8f133
2022.06.30 08:18:38.865902 [ 273 ] {} <Fatal> BaseDaemon: 2. bool DB::ContextAccess::checkAccessImplHelper<true, true>(DB::AccessFlags) const @ 0x1541b8dd in /usr/bin/clickhouse
2022.06.30 08:18:38.866029 [ 273 ] {} <Fatal> BaseDaemon: 3. DB::InterpreterGrantQuery::execute() @ 0x1659fa4b in /usr/bin/clickhouse
2022.06.30 08:18:38.866053 [ 273 ] {} <Fatal> BaseDaemon: 4. ? @ 0x16566556 in /usr/bin/clickhouse
2022.06.30 08:18:38.866088 [ 273 ] {} <Fatal> BaseDaemon: 5. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x1656a5c4 in /usr/bin/clickhouse
2022.06.30 08:18:38.866120 [ 273 ] {} <Fatal> BaseDaemon: 6. DB::DDLWorker::tryExecuteQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::DDLTaskBase&, std::__1::shared_ptr<zkutil::ZooKeeper> const&) @ 0x15cdcb49 in /usr/bin/clickhouse
2022.06.30 08:18:38.866140 [ 273 ] {} <Fatal> BaseDaemon: 7. DB::DDLWorker::processTask(DB::DDLTaskBase&, std::__1::shared_ptr<zkutil::ZooKeeper> const&) @ 0x15cdb557 in /usr/bin/clickhouse
2022.06.30 08:18:38.866155 [ 273 ] {} <Fatal> BaseDaemon: 8. DB::DDLWorker::scheduleTasks(bool) @ 0x15cd94d3 in /usr/bin/clickhouse
2022.06.30 08:18:38.866170 [ 273 ] {} <Fatal> BaseDaemon: 9. DB::DDLWorker::runMainThread() @ 0x15cd3405 in /usr/bin/clickhouse
2022.06.30 08:18:38.866190 [ 273 ] {} <Fatal> BaseDaemon: 10. ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::DDLWorker::*)(), DB::DDLWorker*>(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::'lambda'()::operator()() @ 0x15ce6ff6 in /usr/bin/clickhouse
2022.06.30 08:18:38.866209 [ 273 ] {} <Fatal> BaseDaemon: 11. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb94d077 in /usr/bin/clickhouse
2022.06.30 08:18:38.866223 [ 273 ] {} <Fatal> BaseDaemon: 12. ? @ 0xb95049d in /usr/bin/clickhouse
2022.06.30 08:18:38.866245 [ 273 ] {} <Fatal> BaseDaemon: 13. ? @ 0x7f0096c6a609 in ?
2022.06.30 08:18:38.866259 [ 273 ] {} <Fatal> BaseDaemon: 14. clone @ 0x7f0096b8f133 in ?
2022.06.30 08:18:38.993700 [ 273 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: 894C4AAB85FCB9AAC26136BC446CC5AF)
/entrypoint.sh: line 155: 43 Segmentation fault (core dumped) /usr/bin/clickhouse su "${USER}:${GROUP}" /usr/bin/clickhouse-server --config-file="$CLICKHOUSE_CONFIG" "$@"
```
And then you can't start the cluster anymore because it stays in the task queue and keeps crashing the server. You need to manually stop the server and remove the tasks from clickhouse-keeper. (Or just delete the volumes in this test cluster.)
The same thing happens with REVOKE ... ON CLUSTER ....
Running GRANT without ON CLUSTER on every server individually works fine.
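For completeness, a per-server workaround sketch (the hostnames ch1..ch3 are hypothetical placeholders; adjust credentials as needed):
```bash
#!/usr/bin/env bash
# Apply the grant on every node individually instead of using ON CLUSTER.
for host in ch1 ch2 ch3; do
    clickhouse-client --host "$host" --query \
        "GRANT ALL ON *.* TO testuser WITH GRANT OPTION"
done
```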
I ran it once with crash reports enabled, not sure if it worked.
| https://github.com/ClickHouse/ClickHouse/issues/38618 | https://github.com/ClickHouse/ClickHouse/pull/38674 | c4155433349cdab8ff54584a982076030587e29e | b4103c1a0e02177b618de5905f3a82de7b432466 | "2022-06-30T08:43:02Z" | c++ | "2022-07-04T08:13:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38611 | ["src/Functions/isNullable.cpp", "src/Functions/registerFunctionsNull.cpp", "tests/queries/0_stateless/02353_isnullable.reference", "tests/queries/0_stateless/02353_isnullable.sql"] | isNullable function | **Use case**
Check whether a column is nullable or not.
**Describe the solution you'd like**
SELECT isNullable(NULL);
1 -- or true
**Describe alternatives you've considered**
```
WITH materialize(NULL) AS x
SELECT match(toColumnTypeName(x), 'Nullable') AS res
Query id: fa279bf2-38d5-41ae-89cb-187e588908db
┌─res─┐
│   1 │
└─────┘
```
**Additional context**
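A sketch of the semantics I would expect from such a function (hypothetical output, since the function does not exist yet):
```sql
SELECT isNullable(toNullable(1)); -- expected: 1
SELECT isNullable(1);             -- expected: 0
SELECT isNullable(NULL);          -- expected: 1, since NULL is Nullable(Nothing)
```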
| https://github.com/ClickHouse/ClickHouse/issues/38611 | https://github.com/ClickHouse/ClickHouse/pull/38841 | fa1d5b8bdbc2113521d9c113cb1b146d1e32cd2c | 7c60714b4b5e6507ed0ca74887932ffc1496cd60 | "2022-06-29T22:57:56Z" | c++ | "2022-07-07T09:49:45Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38585 | ["src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/01556_accurate_cast_or_null.reference", "tests/queries/0_stateless/01556_accurate_cast_or_null.sql", "tests/queries/0_stateless/01601_accurate_cast.reference", "tests/queries/0_stateless/01601_accurate_cast.sql", "tests/queries/0_stateless/02026_accurate_cast_or_default.reference", "tests/queries/0_stateless/02026_accurate_cast_or_default.sql", "tests/queries/0_stateless/02303_cast_nullable_to_custom_types.reference", "tests/queries/0_stateless/02303_cast_nullable_to_custom_types.sql"] | accurateCastOrNull raises an exception, rather than simply returning null when casting an invalid value to Bool | **Describe what's wrong**
accurateCastOrNull raises an exception, rather than simply returning null when casting an invalid value to Bool
```
Code: 467. DB::Exception: Received from 127.0.0.1:9000. DB::Exception: Cannot parse boolean value here: 'test', should be 'true' or 'false' controlled by setting bool_true_representation and bool_false_representation or one of True/False/T/F/Y/N/Yes/No/On/Off/Enable/Disable/Enabled/Disabled/1/0: While processing accurateCastOrNull('test', 'Bool'). (CANNOT_PARSE_BOOL)
```
**Does it reproduce on recent release?**
ClickHouse server version 22.7.1.906 (official build).
**How to reproduce**
```
select accurateCastOrNull('test', 'Bool')
```
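Until this is fixed, an explicit guard can emulate the expected behavior (an illustration I am adding, not from the original report):
```sql
-- Map only the two canonical literals; anything else becomes NULL
-- instead of throwing CANNOT_PARSE_BOOL.
SELECT if(s IN ('true', 'false'), s = 'true', NULL) AS b
FROM values('s String', 'true', 'test');
```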
**Expected behavior**
```
I expect it to behave the same as an invalid cast to something like UInt8
SELECT accurateCastOrNull('test', 'UInt8')
Query id: e7d0473b-6ae0-455d-9eae-fb3827e7c671
┌─accurateCastOrNull('test', 'UInt8')─┐
│                                ᴺᵁᴸᴸ │
└─────────────────────────────────────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/38585 | https://github.com/ClickHouse/ClickHouse/pull/54629 | bee9eb5df47091229c149755dbd0b05966317e88 | 975f954a26b5cd9a7fae7c6ebd2a6d8f0b67f7f8 | "2022-06-29T15:33:59Z" | c++ | "2023-10-25T08:31:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38560 | ["src/Processors/Executors/ExecutingGraph.cpp", "src/Processors/Executors/ExecutingGraph.h"] | Hung Check: PullingAsyncPipelineExecutor::cancel, fibers, QueryProfiler, libunwind | https://s3.amazonaws.com/clickhouse-test-reports/38541/9b4da86e2e5cf74a8b5b1205bbabc909e357d654/stress_test__debug__actions_.html
```
{
"is_initial_query": 1,
"user": "default",
"query_id": "64ae3520-41dc-45e9-93b7-bf9d986c4ca7",
"address": "::1",
"port": 41342,
"initial_user": "default",
"initial_query_id": "64ae3520-41dc-45e9-93b7-bf9d986c4ca7",
"initial_address": "::1",
"initial_port": 41342,
"interface": 1,
"os_user": "",
"client_hostname": "c801a6add6e2",
"client_name": "ClickHouse",
"client_revision": "54456",
"client_version_major": "22",
"client_version_minor": "7",
"client_version_patch": "1",
"http_method": 0,
"http_user_agent": "",
"http_referer": "",
"forwarded_for": "",
"quota_key": "",
"distributed_depth": "0",
"elapsed": 2595.502542313,
"is_cancelled": 0,
"is_all_data_sent": 0,
"read_rows": "0",
"read_bytes": "0",
"total_rows_approx": "0",
"written_rows": "0",
"written_bytes": "0",
"memory_usage": "13988229",
"peak_memory_usage": "16085723",
"query": "SELECT sleepEachRow(1) FROM remote('127.{2..21}', system.one)",
"thread_ids": [
"1079",
"1282",
"1514",
"1516",
"1526",
"1533",
"1524",
"1532",
"1533",
"1522",
"1528",
"1568",
"1648",
"1658",
"2473",
"2474",
"1537",
"1543",
"1546",
"1550",
"1541",
"1553"
],
```
```
Thread 293 (Thread 0x7fe388bc4700 (LWP 1079)):
#0 0x00007fe562183376 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x000000002bec0c9d in Poco::EventImpl::waitImpl (this=0x7fe41a901118) at ../contrib/poco/Foundation/src/Event_POSIX.cpp:106
#2 0x0000000016fa0915 in Poco::Event::wait (this=0x7fe41a901118) at ../contrib/poco/Foundation/include/Poco/Event.h:97
#3 0x0000000016f9eeb7 in ThreadFromGlobalPool::join (this=0x7fe413b1f9a8) at ../src/Common/ThreadPool.h:217
#4 0x0000000027939183 in DB::PullingAsyncPipelineExecutor::cancel (this=0x7fe388bb55b0) at ../src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:182
#5 0x000000002789b9ea in DB::TCPHandler::processOrdinaryQueryWithProcessors (this=0x7fe40f3f7200) at ../src/Server/TCPHandler.cpp:700
#6 0x000000002789557f in DB::TCPHandler::runImpl (this=0x7fe40f3f7200) at ../src/Server/TCPHandler.cpp:344
#7 0x00000000278a4065 in DB::TCPHandler::run (this=0x7fe40f3f7200) at ../src/Server/TCPHandler.cpp:1797
#8 0x000000002bd06d19 in Poco::Net::TCPServerConnection::start (this=0x7fe40f3f7200) at ../contrib/poco/Net/src/TCPServerConnection.cpp:43
#9 0x000000002bd07526 in Poco::Net::TCPServerDispatcher::run (this=0x7fe55f0e1400) at ../contrib/poco/Net/src/TCPServerDispatcher.cpp:115
#10 0x000000002bf4b474 in Poco::PooledThread::run (this=0x7fe45629f900) at ../contrib/poco/Foundation/src/ThreadPool.cpp:199
#11 0x000000002bf47ffa in Poco::(anonymous namespace)::RunnableHolder::run (this=0x7fe4563de1c0) at ../contrib/poco/Foundation/src/Thread.cpp:55
#12 0x000000002bf46dde in Poco::ThreadImpl::runnableEntry (pThread=0x7fe45629f938) at ../contrib/poco/Foundation/src/Thread_POSIX.cpp:345
#13 0x00007fe56217c609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#14 0x00007fe5620a1133 in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 342 (Thread 0x7fe36f391700 (LWP 1514)):
#0 0x000000002f9e8db8 in libunwind::LocalAddressSpace::get32 (this=0x30a8eac0 <libunwind::LocalAddressSpace::sThisAddressSpace>, addr=322435816) at ../contrib/libunwind/src/AddressSpace.hpp:172
#1 0x000000002f9eaa53 in libunwind::CFI_Parser<libunwind::LocalAddressSpace>::findFDE (addressSpace=..., pc=760373582, ehSectionStart=307351848, sectionLength=491596760, fdeHint=0, fdeInfo=0x7fe2d86d95b8, cieInfo=0x7fe2d86d9580) at ../contrib/libunwind/src/DwarfParser.hpp:244
#2 0x000000002f9ea5c7 in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::getInfoFromDwarfSection (this=0x7fe2d86d97c8, pc=760373582, sects=..., fdeSectionOffsetHint=0) at ../contrib/libunwind/src/UnwindCursor.hpp:1653
#3 0x000000002f9e5b4b in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::setInfoBasedOnIPRegister (this=0x7fe2d86d97c8, isReturnAddress=true) at ../contrib/libunwind/src/UnwindCursor.hpp:2561
#4 0x000000002f9e5972 in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::step (this=0x7fe2d86d97c8) at ../contrib/libunwind/src/UnwindCursor.hpp:2824
#5 0x000000002f9e44d1 in __unw_step (cursor=0x7fe2d86d97c8) at ../contrib/libunwind/src/libunwind.cpp:180
#6 0x000000002f9e4a05 in unw_backtrace (buffer=0x7fe2d86d9a28, size=45) at ../contrib/libunwind/src/libunwind.cpp:350
#7 0x0000000016ebd051 in StackTrace::tryCapture (this=0x7fe2d86d9a18) at ../src/Common/StackTrace.cpp:305
#8 0x0000000016ebd0c0 in StackTrace::StackTrace (this=0x7fe2d86d9a18, signal_context=...) at ../src/Common/StackTrace.cpp:271
#9 0x0000000016f031ae in DB::(anonymous namespace)::writeTraceInfo (trace_type=DB::TraceType::CPU, info=0x7fe2d86da0f0, context=0x7fe2d86d9fc0) at ../src/Common/QueryProfiler.cpp:67
#10 0x0000000016f02fe6 in DB::QueryProfilerCPU::signalHandler (sig=12, info=0x7fe2d86da0f0, context=0x7fe2d86d9fc0) at ../src/Common/QueryProfiler.cpp:225
#11 <signal handler called>
#12 0x000000002f9e8db8 in libunwind::LocalAddressSpace::get32 (this=0x30a8eac0 <libunwind::LocalAddressSpace::sThisAddressSpace>, addr=348609220) at ../contrib/libunwind/src/AddressSpace.hpp:172
#13 0x000000002f9eaab2 in libunwind::CFI_Parser<libunwind::LocalAddressSpace>::findFDE (addressSpace=..., pc=760373582, ehSectionStart=307351848, sectionLength=491596760, fdeHint=0, fdeInfo=0x7fe2d86da678, cieInfo=0x7fe2d86da640) at ../contrib/libunwind/src/DwarfParser.hpp:253
#14 0x000000002f9ea5c7 in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::getInfoFromDwarfSection (this=0x7fe2d86da888, pc=760373582, sects=..., fdeSectionOffsetHint=0) at ../contrib/libunwind/src/UnwindCursor.hpp:1653
#15 0x000000002f9e5b4b in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::setInfoBasedOnIPRegister (this=0x7fe2d86da888, isReturnAddress=true) at ../contrib/libunwind/src/UnwindCursor.hpp:2561
#16 0x000000002f9e5972 in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::step (this=0x7fe2d86da888) at ../contrib/libunwind/src/UnwindCursor.hpp:2824
#17 0x000000002f9e44d1 in __unw_step (cursor=0x7fe2d86da888) at ../contrib/libunwind/src/libunwind.cpp:180
#18 0x000000002f9e4a05 in unw_backtrace (buffer=0x7fe2d86daae8, size=45) at ../contrib/libunwind/src/libunwind.cpp:350
#19 0x0000000016ebd051 in StackTrace::tryCapture (this=0x7fe2d86daad8) at ../src/Common/StackTrace.cpp:305
#20 0x0000000016ebd0c0 in StackTrace::StackTrace (this=0x7fe2d86daad8, signal_context=...) at ../src/Common/StackTrace.cpp:271
#21 0x0000000016f031ae in DB::(anonymous namespace)::writeTraceInfo (trace_type=DB::TraceType::Real, info=0x7fe2d86db1b0, context=0x7fe2d86db080) at ../src/Common/QueryProfiler.cpp:67
#22 0x0000000016f02f03 in DB::QueryProfilerReal::signalHandler (sig=10, info=0x7fe2d86db1b0, context=0x7fe2d86db080) at ../src/Common/QueryProfiler.cpp:212
#23 <signal handler called>
#24 libunwind::LocalAddressSpace::getEncodedP (this=0x30a8eac0 <libunwind::LocalAddressSpace::sThisAddressSpace>, addr=@0x7fe2d86db700: 315245532, end=315245552, encoding=27 '\033', datarelBase=0) at ../contrib/libunwind/src/AddressSpace.hpp:323
#25 0x000000002f9eab5b in libunwind::CFI_Parser<libunwind::LocalAddressSpace>::findFDE (addressSpace=..., pc=760373582, ehSectionStart=307351848, sectionLength=491596760, fdeHint=0, fdeInfo=0x7fe2d86db7a8, cieInfo=0x7fe2d86db770) at ../contrib/libunwind/src/DwarfParser.hpp:268
#26 0x000000002f9ea5c7 in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::getInfoFromDwarfSection (this=0x7fe2d86db9b8, pc=760373582, sects=..., fdeSectionOffsetHint=0) at ../contrib/libunwind/src/UnwindCursor.hpp:1653
#27 0x000000002f9e5b4b in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::setInfoBasedOnIPRegister (this=0x7fe2d86db9b8, isReturnAddress=true) at ../contrib/libunwind/src/UnwindCursor.hpp:2561
#28 0x000000002f9e5972 in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::step (this=0x7fe2d86db9b8) at ../contrib/libunwind/src/UnwindCursor.hpp:2824
#29 0x000000002f9e44d1 in __unw_step (cursor=0x7fe2d86db9b8) at ../contrib/libunwind/src/libunwind.cpp:180
#30 0x000000002f9e4a05 in unw_backtrace (buffer=0x7fe4796ad988, size=32) at ../contrib/libunwind/src/libunwind.cpp:350
#31 0x0000000016ea6f22 in std::exception::capture (this=0x7fe4796ad980) at ../contrib/libcxx/include/exception:133
#32 0x0000000016ea6ef0 in std::exception::exception (this=0x7fe4796ad980) at ../contrib/libcxx/include/exception:109
#33 0x000000002bec11c0 in Poco::Exception::Exception (this=0x7fe4796ad980, msg=..., code=210) at ../contrib/poco/Foundation/src/Exception.cpp:27
#34 0x0000000016e82fce in DB::Exception::Exception (this=0x7fe4796ad980, msg=..., code=210, remote_=false) at ../src/Common/Exception.cpp:67
#35 0x00000000253d5a09 in DB::NetException::NetException (this=0x7fe4796ad980, msg=..., code=210) at ../src/Common/NetException.h:12
#36 0x0000000027734718 in DB::Connection::connect (this=0x7fe414d9bd98, timeouts=...) at ../src/Client/Connection.cpp:181
#37 0x00000000277359b2 in DB::Connection::getServerRevision (this=0x7fe414d9bd98, timeouts=...) at ../src/Client/Connection.cpp:351
#38 0x0000000027766877 in DB::ConnectionEstablisher::run (this=0x7fe4566e7300, result=..., fail_message=...) at ../src/Client/ConnectionEstablisher.cpp:42
#39 0x000000002776806b in DB::ConnectionEstablisherAsync::Routine::operator() (this=0x7fe2d86dcf20, sink=...) at ../src/Client/ConnectionEstablisher.cpp:138
#40 0x000000002776a72b in std::__1::__invoke<DB::ConnectionEstablisherAsync::Routine&, boost::context::fiber> (__f=..., __args=...) at ../contrib/libcxx/include/type_traits:3640
#41 0x000000002776a6e5 in std::__1::invoke<DB::ConnectionEstablisherAsync::Routine&, boost::context::fiber> (__f=..., __args=...) at ../contrib/libcxx/include/__functional/invoke.h:93
#42 0x000000002776a5fe in boost::context::detail::fiber_record<boost::context::fiber, FiberStack&, DB::ConnectionEstablisherAsync::Routine>::run (this=0x7fe2d86dcf00, fctx=0x7fe36f382970) at ../contrib/boost/boost/context/fiber_fcontext.hpp:140
#43 0x000000002776a4ba in boost::context::detail::fiber_entry<boost::context::detail::fiber_record<boost::context::fiber, FiberStack&, DB::ConnectionEstablisherAsync::Routine> > (t=...) at ../contrib/boost/boost/context/fiber_fcontext.hpp:80
#44 0x000000002d52614f in make_fcontext () at ../contrib/boost/libs/context/src/asm/make_x86_64_sysv_elf_gas.S:71
#45 0x0000000000000000 in ?? ()
```
cc: @KochetovNicolai | https://github.com/ClickHouse/ClickHouse/issues/38560 | https://github.com/ClickHouse/ClickHouse/pull/42874 | bb507356ef33ae55f094cadfbd8ad78e7bff51b0 | 88033562cd3906537cdf60004479495ccedd3061 | "2022-06-29T11:10:10Z" | c++ | "2022-11-08T09:52:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38538 | ["src/Interpreters/ActionsVisitor.cpp", "src/Interpreters/ActionsVisitor.h", "src/Interpreters/ExpressionAnalyzer.cpp", "src/Interpreters/ExpressionAnalyzer.h", "src/Interpreters/GetAggregatesVisitor.cpp", "tests/queries/0_stateless/02354_window_expression_with_aggregation_expression.reference", "tests/queries/0_stateless/02354_window_expression_with_aggregation_expression.sql"] | NOT_FOUND_COLUMN_IN_BLOCK: Not found column multiply(sum(a), 100) in block | This window function query does not work:
```
select
sum(a)*100/sum(sum(a)) over
(partition by b) as r
from
(
SELECT 1 as a, 2 as b
UNION ALL
SELECT 3 as a, 4 as b
UNION ALL
SELECT 5 as a, 2 as b
) as t
group by
b
```
```
Received exception from server (version 22.7.1):
Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Not found column multiply(sum(a), 100) in block. Stack trace:
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb8f1c3a in /usr/bin/clickhouse
1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(int, fmt::v8::basic_format_string<char, fmt::v8::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>::type>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xb95e638 in /usr/bin/clickhouse
2. DB::ActionsDAG::updateHeader(DB::Block) const @ 0x15a85d21 in /usr/bin/clickhouse
3. DB::ExpressionTransform::transformHeader(DB::Block, DB::ActionsDAG const&) @ 0x17401fc4 in /usr/bin/clickhouse
4. DB::ExpressionStep::ExpressionStep(DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG>) @ 0x17507dc0 in /usr/bin/clickhouse
5. DB::InterpreterSelectQuery::executeExpression(DB::QueryPlan&, std::__1::shared_ptr<DB::ActionsDAG> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x162e1152 in /usr/bin/clickhouse
6. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::optional<DB::Pipe>) @ 0x162d5d31 in /usr/bin/clickhouse
7. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x162d2f75 in /usr/bin/clickhouse
8. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x16317614 in /usr/bin/clickhouse
9. DB::InterpreterSelectWithUnionQuery::execute() @ 0x1631894d in /usr/bin/clickhouse
10. DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x165ecab2 in /usr/bin/clickhouse
11. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x165e9f75 in /usr/bin/clickhouse
12. DB::TCPHandler::runImpl() @ 0x171d3c5a in /usr/bin/clickhouse
13. DB::TCPHandler::run() @ 0x171e62d9 in /usr/bin/clickhouse
14. Poco::Net::TCPServerConnection::start() @ 0x19e85433 in /usr/bin/clickhouse
15. Poco::Net::TCPServerDispatcher::run() @ 0x19e867b1 in /usr/bin/clickhouse
16. Poco::PooledThread::run() @ 0x1a037b5b in /usr/bin/clickhouse
17. Poco::ThreadImpl::runnableEntry(void*) @ 0x1a035260 in /usr/bin/clickhouse
18. ? @ 0x7f5406b79609 in ?
19. clone @ 0x7f5406a9e163 in ?
. (NOT_FOUND_COLUMN_IN_BLOCK)
```
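Moving the multiplication outside the window expression appears to sidestep the failing multiply(sum(a), 100) node (my rewrite; arithmetically equivalent, but not verified on this version):
```sql
select
    100 * (sum(a) / sum(sum(a)) over (partition by b)) as r
from
(
    SELECT 1 as a, 2 as b
    UNION ALL
    SELECT 3 as a, 4 as b
    UNION ALL
    SELECT 5 as a, 2 as b
) as t
group by
    b
```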
This works:
```
select
sum(a)/sum(sum(a)) over
(partition by b) as r
from
(
SELECT 1 as a, 2 as b
UNION ALL
SELECT 3 as a, 4 as b
UNION ALL
SELECT 5 as a, 2 as b
) as t
group by
b
``` | https://github.com/ClickHouse/ClickHouse/issues/38538 | https://github.com/ClickHouse/ClickHouse/pull/39112 | 71aba5b4c663716752a1677d0971f576760648fd | 9e4f516f357027779668b01b54c5cc458c82ada3 | "2022-06-28T14:50:58Z" | c++ | "2022-07-13T16:19:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38536 | ["contrib/NuRaft", "docs/en/operations/server-configuration-parameters/settings.md", "docs/ru/operations/server-configuration-parameters/settings.md", "src/Coordination/KeeperServer.cpp", "src/Coordination/KeeperServer.h"] | Clickhouse keeper ignores listen_host for port 9234 | When binding the Raft port :::9234, ClickHouse Keeper ignores <listen_host> from the config.
**How to reproduce**
Put keeper_config.xml into a local keeper directory and then mount it into a container
```
<clickhouse>
<listen_host>127.0.0.198</listen_host>
<keeper_server>
<tcp_port>9181</tcp_port>
<server_id>1</server_id>
<log_storage_path>/var/lib/clickhouse/coordination/logs</log_storage_path>
<snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
<raft_configuration>
<server>
<id>1</id>
<hostname>127.0.0.1</hostname>
<port>9234</port>
<can_become_leader>true</can_become_leader>
<priority>3</priority>
</server>
</raft_configuration>
</keeper_server>
</clickhouse>
```
Run: `docker run --rm -it --net=host -v .../keeper:/etc/clickhouse-keeper/ -e KEEPER_CONFIG=/etc/clickhouse-keeper/keeper_config.xml clickhouse/clickhouse-keeper:22.6-alpine`
**Expected behavior**
`$ sudo netstat -nlpte | grep clickhouse` shows binds only on 127.0.0.198
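(Hypothetical expected output, my sketch modeled on the actual output below:)
```
$ sudo netstat -nlpt | grep clickhouse
tcp 0 0 127.0.0.198:9181 0.0.0.0:* LISTEN 3571074/clickhouse-
tcp 0 0 127.0.0.198:9234 0.0.0.0:* LISTEN 3571074/clickhouse-
```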
**Actual behavior**
```
$ sudo netstat -nlpt | grep clickhouse
tcp 0 0 127.0.0.198:9181 0.0.0.0:* LISTEN 3571074/clickhouse-
tcp6 0 0 :::9234 :::* LISTEN 3571074/clickhouse-
$
``` | https://github.com/ClickHouse/ClickHouse/issues/38536 | https://github.com/ClickHouse/ClickHouse/pull/39973 | 0765495be105c4187018ff5376006da432cb01ee | a87a762b0cf6ad6fd07d9a5ae9a24c3085a42576 | "2022-06-28T14:24:59Z" | c++ | "2022-08-23T07:03:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38534 | ["tests/queries/0_stateless/02960_partition_by_udf.reference", "tests/queries/0_stateless/02960_partition_by_udf.sql"] | UDF in partition by | ```sql
CREATE FUNCTION f1 AS (x) -> x;
CREATE TABLE hit
(
`UserID` UInt32,
`URL` String,
`EventTime` DateTime )
ENGINE = MergeTree
partition by f1(URL)
ORDER BY (EventTime);
Received exception from server (version 22.6.1):
Code: 49. DB::Exception: Received from localhost:9000.
DB::Exception: minmax_count projection can only have keys about partition columns. It's a bug. (LOGICAL_ERROR)
```
(I don't need this feature myself; this is just a report about the LOGICAL_ERROR. I would forbid UDFs in PARTITION BY / ORDER BY just in case.)
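For comparison, inlining the expression presumably avoids the error, since the minmax_count projection then sees a plain partition column (my untested assumption; the table name is hypothetical, and f1 is the identity here):
```sql
-- Illustration only: partitioning by a raw URL is a bad partition key in practice.
CREATE TABLE hit_inline
(
    `UserID` UInt32,
    `URL` String,
    `EventTime` DateTime
)
ENGINE = MergeTree
PARTITION BY URL
ORDER BY (EventTime);
```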
| https://github.com/ClickHouse/ClickHouse/issues/38534 | https://github.com/ClickHouse/ClickHouse/pull/58391 | 1f5e0f52ff889e675782226c18d1029ffc731f4c | 1564b94761950fd942bc2ffab429fe91ee5d6190 | "2022-06-28T13:09:16Z" | c++ | "2024-01-01T17:54:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38532 | ["src/Columns/ColumnDecimal.h", "src/Common/IntervalKind.cpp", "src/Common/IntervalKind.h", "src/Processors/Transforms/WindowTransform.cpp", "tests/queries/0_stateless/02346_non_negative_derivative.reference", "tests/queries/0_stateless/02346_non_negative_derivative.sql"] | nonNegativeDerivative window function fails with LOGICAL_ERROR | **Describe the unexpected behaviour**
New `nonNegativeDerivative` window function fails with LOGICAL_ERROR.
**How to reproduce**
```
root@clickhouse1:/# clickhouse client
ClickHouse client version 22.6.1.1985 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 22.6.1 revision 54455.
clickhouse1 :) Bye.
root@clickhouse1:/# echo -e "SELECT id, metric, ts, nonNegativeDerivative(metric, ts) OVER (ORDER BY id ASC) AS nnd FROM values('id Int8, metric Float32, ts DateTime64(0)', (1,1,'2022-12-12 00:00:00'), (2,2,'2022-12-12 00:00:01'),(3,3,'2022-12-12 00:00:02')) FORMAT TabSeparatedWithNames" | clickhouse client -n 2>&1
Received exception from server (version 22.6.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid number of rows in Chunk column Float32 position 3: expected 3, got 6. (LOGICAL_ERROR)
(query: SELECT id, metric, ts, nonNegativeDerivative(metric, ts) OVER (ORDER BY id ASC) AS nnd FROM values('id Int8, metric Float32, ts DateTime64(0)', (1,1,'2022-12-12 00:00:00'), (2,2,'2022-12-12 00:00:01'),(3,3,'2022-12-12 00:00:02')) FORMAT TabSeparatedWithNames
)
root@clickhouse1:/#
```
**Which ClickHouse server version to use**
22.6.1.1985 (official build).
**Error message and/or stacktrace**
```
2022.06.28 15:45:36.627221 [ 9 ] {5552a5a1-7727-4ee2-a7ad-f06064384257} <Error> TCPHandler: Code: 49. DB::Exception: Invalid number of rows in Chunk column Float32 position 3: expected 3, got 6. (LOGICAL_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb8a147a in /usr/bin/clickhouse
1. DB::Chunk::checkNumRowsIsConsistent() @ 0x1718194e in /usr/bin/clickhouse
2. DB::WindowTransform::prepare() @ 0x173d497f in /usr/bin/clickhouse
3. DB::ExecutingGraph::updateNode(unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&) @ 0x171a9b59 in /usr/bin/clickhouse
4. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x171a440b in /usr/bin/clickhouse
5. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x171a3921 in /usr/bin/clickhouse
6. DB::PipelineExecutor::execute(unsigned long) @ 0x171a36b8 in /usr/bin/clickhouse
7. ? @ 0x171b38ce in /usr/bin/clickhouse
8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb94d0b7 in /usr/bin/clickhouse
9. ? @ 0xb9504dd in /usr/bin/clickhouse
10. ? @ 0x7f7cd5e02609 in ?
11. clone @ 0x7f7cd5d29293 in ?
```
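Until the built-in function works, the same quantity can be emulated with lagInFrame (a sketch I am adding; the window name, framing, and first-row handling are my assumptions, and it has not been validated against the built-in):
```sql
SELECT
    id,
    metric,
    ts,
    -- On the first row lagInFrame yields type defaults, so nnd degenerates to ~0 there.
    greatest(0., (metric - lagInFrame(metric) OVER w)
        / dateDiff('second', lagInFrame(ts) OVER w, ts)) AS nnd
FROM values('id Int8, metric Float32, ts DateTime64(0)',
    (1, 1, '2022-12-12 00:00:00'),
    (2, 2, '2022-12-12 00:00:01'),
    (3, 3, '2022-12-12 00:00:02'))
WINDOW w AS (ORDER BY id ASC ROWS BETWEEN 1 PRECEDING AND CURRENT ROW)
```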
| https://github.com/ClickHouse/ClickHouse/issues/38532 | https://github.com/ClickHouse/ClickHouse/pull/38774 | db838f1343567321da844725f4d54331f5985e20 | f15d9ca59cf5edae887f77eb745207a8233bfc98 | "2022-06-28T12:49:31Z" | c++ | "2022-07-07T18:39:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38498 | ["src/Common/parseGlobs.cpp", "src/Common/tests/gtest_makeRegexpPatternFromGlobs.cpp", "tests/queries/0_stateless/02297_regex_parsing_file_names.reference", "tests/queries/0_stateless/02297_regex_parsing_file_names.sh"] | `{0..10}` expansion in external file names is incorrect. | It is interpreted as an "aligned" expansion producing something like 00, 01, 02... 10 instead of 0, 1, 2... 10.
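A minimal illustration of the mismatch (my example; the file names are hypothetical):
```sql
-- With files data_0.csv ... data_10.csv on disk, this should read all eleven,
-- but the aligned expansion only looks for data_00.csv ... data_09.csv and
-- data_10.csv, so the single-digit names are missed.
SELECT count() FROM file('data_{0..10}.csv', 'CSV', 'x UInt64');
```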
https://pastila.nl/?09014b44/8155336bcfbf55d813814753465a1adf | https://github.com/ClickHouse/ClickHouse/issues/38498 | https://github.com/ClickHouse/ClickHouse/pull/38502 | d6a09acb382103428a86302496fc7d17074761ab | 110862a11ec2e8bf8ec8e47a132cb69e8466e2c0 | "2022-06-28T00:55:44Z" | c++ | "2022-07-14T06:27:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38474 | ["base/base/safeExit.cpp"] | Thread leak (TCPHandler) | https://s3.amazonaws.com/clickhouse-test-reports/38335/fe1b1aa77ba89d68abebc62dac536dee17d4ddc9/stress_test__thread__actions_.html
```
==================
WARNING: ThreadSanitizer: thread leak (pid=643)
Thread T845 'TCPHandler' (tid=10064, finished) created by thread T250 at:
#0 pthread_create <null> (clickhouse+0xb31eb2d) (BuildId: ff6deb04352baabe)
#1 Poco::ThreadImpl::startImpl(Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable> >) build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:202:6 (clickhouse+0x1ef077c2) (BuildId: ff6deb04352baabe)
#2 Poco::Thread::start(Poco::Runnable&) build_docker/../contrib/poco/Foundation/src/Thread.cpp:128:2 (clickhouse+0x1ef0904c) (BuildId: ff6deb04352baabe)
#3 Poco::PooledThread::start() build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:85:10 (clickhouse+0x1ef0d126) (BuildId: ff6deb04352baabe)
#4 Poco::ThreadPool::getThread() build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:461:14 (clickhouse+0x1ef0d126)
#5 Poco::ThreadPool::startWithPriority(Poco::Thread::Priority, Poco::Runnable&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:365:2 (clickhouse+0x1ef0d527) (BuildId: ff6deb04352baabe)
#6 Poco::Net::TCPServerDispatcher::enqueue(Poco::Net::StreamSocket const&) build_docker/../contrib/poco/Net/src/TCPServerDispatcher.cpp:152:17 (clickhouse+0x1ecc052e) (BuildId: ff6deb04352baabe)
#7 Poco::Net::TCPServer::run() build_docker/../contrib/poco/Net/src/TCPServer.cpp:148:21 (clickhouse+0x1ecbf047) (BuildId: ff6deb04352baabe)
#8 Poco::(anonymous namespace)::RunnableHolder::run() build_docker/../contrib/poco/Foundation/src/Thread.cpp:55:11 (clickhouse+0x1ef0964f) (BuildId: ff6deb04352baabe)
#9 Poco::ThreadImpl::runnableEntry(void*) build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345:27 (clickhouse+0x1ef07d8b) (BuildId: ff6deb04352baabe)
SUMMARY: ThreadSanitizer: thread leak (/usr/bin/clickhouse+0xb31eb2d) (BuildId: ff6deb04352baabe) in pthread_create
==================
```
```
zgrep -Fa "[ 10064 ]" clickhouse-server.stress.log.gz | tail -50
2022.06.27 19:14:39.634255 [ 10064 ] {c0a004f5-ba61-42cd-9b08-f6d9941ed326} <Trace> ContextAccess (default): Access granted: CREATE DATABASE ON test_puuot9.*
2022.06.27 19:14:39.635504 [ 10064 ] {c0a004f5-ba61-42cd-9b08-f6d9941ed326} <Information> DatabaseAtomic (test_puuot9): Metadata processed, database test_puuot9 has 0 tables and 0 dictionaries in total.
2022.06.27 19:14:39.635566 [ 10064 ] {c0a004f5-ba61-42cd-9b08-f6d9941ed326} <Information> TablesLoader: Parsed metadata of 0 tables in 1 databases in 0.000203696 sec
2022.06.27 19:14:39.635622 [ 10064 ] {c0a004f5-ba61-42cd-9b08-f6d9941ed326} <Test> TablesLoader: Have 0 independent tables:
2022.06.27 19:14:39.635659 [ 10064 ] {c0a004f5-ba61-42cd-9b08-f6d9941ed326} <Test> TablesLoader: Have 0 independent tables:
2022.06.27 19:14:39.635695 [ 10064 ] {c0a004f5-ba61-42cd-9b08-f6d9941ed326} <Information> TablesLoader: Loading 0 tables with 0 dependency level
2022.06.27 19:14:39.635737 [ 10064 ] {c0a004f5-ba61-42cd-9b08-f6d9941ed326} <Information> DatabaseAtomic (test_puuot9): Starting up tables.
2022.06.27 19:14:39.636234 [ 10064 ] {c0a004f5-ba61-42cd-9b08-f6d9941ed326} <Debug> DynamicQueryHandler: Done processing query
2022.06.27 19:14:39.639679 [ 10064 ] {c0a004f5-ba61-42cd-9b08-f6d9941ed326} <Debug> MemoryTracker: Peak memory usage (for query): 1.00 MiB.
2022.06.27 19:14:39.639815 [ 10064 ] {} <Debug> MemoryTracker: Peak memory usage (for query): 1.00 MiB.
2022.06.27 19:14:39.639909 [ 10064 ] {} <Debug> HTTP-Session: 50b9622d-d92a-4c9e-a9a7-632aea76cebf Destroying unnamed session
2022.06.27 19:14:40.486714 [ 10064 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: POST, Address: [::1]:35116, User-Agent: (none), Length: 0, Content Type: , Transfer Encoding: identity, X-Forwarded-For: (none)
2022.06.27 19:14:40.487159 [ 10064 ] {} <Trace> DynamicQueryHandler: Request URI: /?allow_experimental_database_replicated=1&query=DROP+DATABASE+test_gp9gsm&database=system&connect_timeout=598&receive_timeout=598&send_timeout=598&http_connection_timeout=598&http_receive_timeout=598&http_send_timeout=598&log_comment=02302_s3_file_pruning.sql
2022.06.27 19:14:40.487305 [ 10064 ] {} <Debug> HTTP-Session: 4036e40d-12f3-4814-90a0-94822ff3dedf Authenticating user 'default' from [::1]:35116
2022.06.27 19:14:40.487595 [ 10064 ] {} <Debug> HTTP-Session: 4036e40d-12f3-4814-90a0-94822ff3dedf Authenticated with global context as user 94309d50-4f52-5250-31bd-74fecac179db
2022.06.27 19:14:40.493948 [ 10064 ] {984606eb-cae4-40d4-9ae9-4c50fc3ce6b8} <Debug> executeQuery: (from [::1]:35116) (comment: 02302_s3_file_pruning.sql) DROP DATABASE test_gp9gsm (stage: Complete)
2022.06.27 19:14:40.494162 [ 10064 ] {984606eb-cae4-40d4-9ae9-4c50fc3ce6b8} <Trace> ContextAccess (default): Access granted: DROP DATABASE ON test_gp9gsm.*
2022.06.27 19:14:40.583326 [ 10064 ] {984606eb-cae4-40d4-9ae9-4c50fc3ce6b8} <Debug> DynamicQueryHandler: Done processing query
2022.06.27 19:14:40.585370 [ 10064 ] {984606eb-cae4-40d4-9ae9-4c50fc3ce6b8} <Debug> MemoryTracker: Peak memory usage (for query): 1.00 MiB.
2022.06.27 19:14:40.585462 [ 10064 ] {} <Debug> MemoryTracker: Peak memory usage (for query): 1.00 MiB.
2022.06.27 19:14:40.585520 [ 10064 ] {} <Debug> HTTP-Session: 4036e40d-12f3-4814-90a0-94822ff3dedf Destroying unnamed session
2022.06.27 19:15:00.226252 [ 10064 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: [::1]:34062
2022.06.27 19:15:00.248440 [ 10064 ] {} <Debug> TCPHandler: Connected ClickHouse client version 22.7.0, revision: 54456, database: test_voh2z3, user: u02294.
2022.06.27 19:15:00.248576 [ 10064 ] {} <Debug> TCP-Session: 51cfd390-f2d6-4d00-b99f-d44765db0881 Authenticating user 'u02294' from [::1]:34062
2022.06.27 19:15:00.248676 [ 10064 ] {} <Debug> TCP-Session: 51cfd390-f2d6-4d00-b99f-d44765db0881 Authenticated with global context as user 2c40c8e1-a161-90bd-4874-27cc5f63119d
2022.06.27 19:15:00.248767 [ 10064 ] {} <Debug> TCP-Session: 51cfd390-f2d6-4d00-b99f-d44765db0881 Creating session context with user_id: 2c40c8e1-a161-90bd-4874-27cc5f63119d
2022.06.27 19:15:04.796915 [ 10064 ] {} <Debug> TCP-Session: 51cfd390-f2d6-4d00-b99f-d44765db0881 Creating query context from session context, user_id: 2c40c8e1-a161-90bd-4874-27cc5f63119d, parent context user: u02294
2022.06.27 19:15:05.666682 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> executeQuery: (from [::1]:34062, user: u02294) (comment: 02294_overcommit_overflow.sh) SELECT number FROM numbers(130000) GROUP BY number SETTINGS max_memory_usage_for_user=5000000,memory_overcommit_ratio_denominator=2000000000000000000,memory_usage_overcommit_max_wait_microseconds=500 (stage: Complete)
2022.06.27 19:15:10.018819 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Trace> ContextAccess (u02294): Access granted: CREATE TEMPORARY TABLE ON *.*
2022.06.27 19:15:12.572547 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2022.06.27 19:15:12.573066 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> UserOvercommitTracker: Trying to choose query to stop from 9 queries
2022.06.27 19:15:12.573137 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> UserOvercommitTracker: Query has ratio 0/2000000000000000000
2022.06.27 19:15:12.573184 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> UserOvercommitTracker: Query has ratio 0/2000000000000000000
2022.06.27 19:15:12.573220 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> UserOvercommitTracker: Query has ratio 0/2000000000000000000
2022.06.27 19:15:12.573251 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> UserOvercommitTracker: Query has ratio 0/2000000000000000000
2022.06.27 19:15:12.573281 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> UserOvercommitTracker: Query has ratio 2101278/2000000000000000000
2022.06.27 19:15:12.573312 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> UserOvercommitTracker: Query has ratio 2101278/2000000000000000000
2022.06.27 19:15:12.573342 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> UserOvercommitTracker: Query has ratio 0/2000000000000000000
2022.06.27 19:15:12.573371 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> UserOvercommitTracker: Query has ratio 0/2000000000000000000
2022.06.27 19:15:12.573400 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> UserOvercommitTracker: Query has ratio 2101278/2000000000000000000
2022.06.27 19:15:12.573431 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> UserOvercommitTracker: Selected to stop query with overcommit ratio 2101278/2000000000000000000
2022.06.27 19:15:12.574046 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> UserOvercommitTracker: Memory was not freed within timeout
2022.06.27 19:15:44.143125 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Error> executeQuery: Code: 241. DB::Exception: Memory limit (for user) exceeded: would use 6.01 MiB (attempt to allocate chunk of 2101278 bytes), maximum: 4.77 MiB. OvercommitTracker decision: Waiting timeout for memory to be freed is reached.. (MEMORY_LIMIT_EXCEEDED) (version 22.7.1.1) (from [::1]:34062) (comment: 02294_overcommit_overflow.sh) (in query: SELECT number FROM numbers(130000) GROUP BY number SETTINGS max_memory_usage_for_user=5000000,memory_overcommit_ratio_denominator=2000000000000000000,memory_usage_overcommit_max_wait_microseconds=500), Stack trace (when copying this message, always include the lines below):
2022.06.27 19:15:44.162817 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Error> TCPHandler: Code: 241. DB::Exception: Memory limit (for user) exceeded: would use 6.01 MiB (attempt to allocate chunk of 2101278 bytes), maximum: 4.77 MiB. OvercommitTracker decision: Waiting timeout for memory to be freed is reached.. (MEMORY_LIMIT_EXCEEDED), Stack trace (when copying this message, always include the lines below):
2022.06.27 19:15:44.198689 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Warning> TCPHandler: Client has gone away.
2022.06.27 19:15:44.208398 [ 10064 ] {8f344ba3-cd8d-455c-8a43-dab96368b8a2} <Debug> MemoryTracker: Peak memory usage (for query): 2.00 MiB.
2022.06.27 19:15:44.208489 [ 10064 ] {} <Debug> MemoryTracker: Peak memory usage (for query): 2.00 MiB.
2022.06.27 19:15:44.208550 [ 10064 ] {} <Debug> TCPHandler: Processed in 42.965048756 sec.
2022.06.27 19:15:44.208692 [ 10064 ] {} <Debug> TCPHandler: Done processing connection.
2022.06.27 19:15:44.229543 [ 10064 ] {} <Debug> TCP-Session: 51cfd390-f2d6-4d00-b99f-d44765db0881 Destroying unnamed session
``` | https://github.com/ClickHouse/ClickHouse/issues/38474 | https://github.com/ClickHouse/ClickHouse/pull/43009 | c5edea19f381076fb01184fb2d31b91393d4bba6 | fc77d53db1165ffc01af356b2b44cb52976a4270 | "2022-06-27T17:48:40Z" | c++ | "2022-11-07T19:20:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38471 | ["tests/ci/fast_test_check.py", "tests/ci/pr_info.py", "tests/ci/run_check.py", "tests/ci/style_check.py"] | Make `FastTest` and `StyleCheck` required for merge checks | We want to gradually make our checks required for merge: https://github.com/ClickHouse/ClickHouse/issues/18507.
The first step is to implement such requirements for StyleCheck and FastTest. GitHub has a mechanism (https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests) that allows making a check required. Unfortunately, we cannot use it directly, because sometimes we don't have FastTest at all (documentation check) or want to merge without it. So the solution will be divided into three steps:
1. Implement a new check (MergeCheck https://github.com/ClickHouse/ClickHouse/tree/master/tests/ci) which will depend on FastTest and StyleCheck in workflows where they exist, and which will always be green in workflows where we don't have such checks (example of dependencies https://github.com/ClickHouse/ClickHouse/blob/master/.github/workflows/pull_request.yml#L253-L254).
2. Implement an additional workflow which will be triggered by "label" (https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#label) events and mark the MergeCheck status green.
3. Enable a protection rule that doesn't allow merging anything without a green MergeCheck. | https://github.com/ClickHouse/ClickHouse/issues/38471 | https://github.com/ClickHouse/ClickHouse/pull/38744 | bd97233a4f64f2148c44857aaea3115a6debe668 | af1136c9906c49dcc03986318c5717914e73ab96 | "2022-06-27T17:01:09Z" | c++ | "2022-07-08T08:17:38Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38460 | ["src/Functions/getScalar.cpp", "tests/queries/0_stateless/02375_scalar_lc_cte.reference", "tests/queries/0_stateless/02375_scalar_lc_cte.sql"] | Scalar CTE with LowCardinality result: Invalid column type for ColumnUnique::insertRangeFrom. Expected String, got ColumnLowCardinality | ```
WITH ( SELECT toLowCardinality('a') ) AS bar
SELECT bar
```
```
Received exception from server (version 22.6.1):
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Invalid column type for ColumnUnique::insertRangeFrom. Expected String, got ColumnLowCardinality: While processing __getScalar('9926677614433849525_14785918208164099665') AS bar. Stack trace:
0. ./build_docker/../contrib/libcxx/include/exception:133: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x17e81fcc in /usr/bin/clickhouse
1. ./build_docker/../src/Common/Exception.cpp:69: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xaeeb0fa in /usr/bin/clickhouse
2. ./build_docker/../src/Columns/ColumnUnique.h:0: COW<DB::IColumn>::mutable_ptr<DB::IColumn> DB::ColumnUnique<DB::ColumnString>::uniqueInsertRangeImpl<char8_t>(DB::IColumn const&, unsigned long, unsigned long, unsigned long, DB::ColumnVector<char8_t>::MutablePtr&&, DB::ReverseIndex<unsigned long, DB::ColumnString>*, unsigned long) @ 0x136bc93d in /usr/bin/clickhouse
3. ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:98: COW<DB::IColumn>::mutable_ptr<DB::IColumn> DB::ColumnUnique<DB::ColumnString>::uniqueInsertRangeFrom(DB::IColumn const&, unsigned long, unsigned long)::'lambda'(auto)::operator()<char8_t>(auto) const @ 0x136bbddc in /usr/bin/clickhouse
4. ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:115: DB::ColumnUnique<DB::ColumnString>::uniqueInsertRangeFrom(DB::IColumn const&, unsigned long, unsigned long) @ 0x136bad7b in /usr/bin/clickhouse
5. ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:138: DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x133410fe in /usr/bin/clickhouse
6. ./build_docker/../src/Functions/IFunction.cpp:0: DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x13341c52 in /usr/bin/clickhouse
7. ./build_docker/../src/Interpreters/ActionsDAG.cpp:0: DB::ActionsDAG::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x138922d9 in /usr/bin/clickhouse
8. ./build_docker/../src/Interpreters/ActionsVisitor.cpp:0: DB::ScopeStack::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x13b877db in /usr/bin/clickhouse
9. ./build_docker/../contrib/libcxx/include/string:1445: DB::ActionsMatcher::Data::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x13b932cf in /usr/bin/clickhouse
10. ./build_docker/../contrib/libcxx/include/string:1445: DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x13b8d0ae in /usr/bin/clickhouse
11. ./build_docker/../src/Interpreters/ActionsVisitor.cpp:766: DB::ActionsMatcher::visit(DB::ASTExpressionList&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x13b8fa1d in /usr/bin/clickhouse
12. ./build_docker/../src/Interpreters/InDepthNodeVisitor.h:43: DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) @ 0x13b73855 in /usr/bin/clickhouse
13. ./build_docker/../src/Interpreters/ActionsVisitor.h:185: DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ActionsDAG>&, bool) @ 0x13b65563 in /usr/bin/clickhouse
14. ./build_docker/../contrib/libcxx/include/__memory/shared_ptr.h:702: DB::SelectQueryExpressionAnalyzer::appendSelect(DB::ExpressionActionsChain&, bool) @ 0x13b6c8f5 in /usr/bin/clickhouse
15. ./build_docker/../src/Interpreters/ExpressionAnalyzer.cpp:1867: DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::__1::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) @ 0x13b70a06 in /usr/bin/clickhouse
16. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:722: DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0x13fe26a4 in /usr/bin/clickhouse
17. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:582: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, DB::SubqueryForSet, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, DB::SubqueryForSet> > >, std::__1::unordered_map<DB::PreparedSetKey, std::__1::shared_ptr<DB::Set>, DB::PreparedSetKey::Hash, std::__1::equal_to<DB::PreparedSetKey>, std::__1::allocator<std::__1::pair<DB::PreparedSetKey const, std::__1::shared_ptr<DB::Set> > > >)::$_1::operator()(bool) const @ 0x13fdc963 in /usr/bin/clickhouse
18. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:588: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, DB::SubqueryForSet, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, DB::SubqueryForSet> > >, std::__1::unordered_map<DB::PreparedSetKey, std::__1::shared_ptr<DB::Set>, DB::PreparedSetKey::Hash, std::__1::equal_to<DB::PreparedSetKey>, std::__1::allocator<std::__1::pair<DB::PreparedSetKey const, std::__1::shared_ptr<DB::Set> > > >) @ 0x13fd87ad in /usr/bin/clickhouse
19. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:173: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x13fd6b52 in /usr/bin/clickhouse
20. ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:725: DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x140304df in /usr/bin/clickhouse
21. ./build_docker/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:149: DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x1402efe4 in /usr/bin/clickhouse
22. ./build_docker/../contrib/libcxx/include/vector:399: std::__1::__unique_if<DB::InterpreterSelectWithUnionQuery>::__unique_single std::__1::make_unique<DB::InterpreterSelectWithUnionQuery, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions const&>(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions const&) @ 0x13fa6ef4 in /usr/bin/clickhouse
23. ./build_docker/../src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x13fa61b8 in /usr/bin/clickhouse
24. ./build_docker/../src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x142734e1 in /usr/bin/clickhouse
25. ./build_docker/../src/Interpreters/executeQuery.cpp:1069: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x14271c0d in /usr/bin/clickhouse
26. ./build_docker/../src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x14be2cfe in /usr/bin/clickhouse
27. ./build_docker/../src/Server/TCPHandler.cpp:1783: DB::TCPHandler::run() @ 0x14bee8f9 in /usr/bin/clickhouse
28. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x17d6c387 in /usr/bin/clickhouse
29. ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:54: Poco::Net::TCPServerDispatcher::run() @ 0x17d6c847 in /usr/bin/clickhouse
30. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x17ee42e7 in /usr/bin/clickhouse
31. ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:156: Poco::ThreadImpl::runnableEntry(void*) @ 0x17ee1cc6 in /usr/bin/clickhouse
. (ILLEGAL_COLUMN)
``` | https://github.com/ClickHouse/ClickHouse/issues/38460 | https://github.com/ClickHouse/ClickHouse/pull/39716 | e6efb47aa362d1ce0731e4f1f7e4070cd6eaa367 | b84e65bb3b7d4162e9caf0fadd296a895db38b3e | "2022-06-27T13:54:05Z" | c++ | "2022-08-03T16:53:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,457 | ["docs/en/sql-reference/functions/geo/index.md"] | missing index page in Geo section | https://clickhouse.com/docs/en/sql-reference/functions/geo/
Should list 4 subpages. | https://github.com/ClickHouse/ClickHouse/issues/38457 | https://github.com/ClickHouse/ClickHouse/pull/38549 | 80e7abd0feaffd3441afa8f0f73f2b73e028f3ad | c45e9a36bd7dd7d72ce3d3ee1aa12d340543051b | "2022-06-27T12:08:24Z" | c++ | "2022-06-29T10:11:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,425 | ["src/Parsers/ParserCreateQuery.cpp", "tests/queries/0_stateless/02345_create_table_allow_trailing_comma.reference", "tests/queries/0_stateless/02345_create_table_allow_trailing_comma.sql"] | Allow trailing comma in columns list | **Use case**
```
CREATE TABLE t
(
x UInt8,
y UInt8,
)
``` | https://github.com/ClickHouse/ClickHouse/issues/38425 | https://github.com/ClickHouse/ClickHouse/pull/38440 | 4cda5491f639081428765181996cf1e118af5cfa | 0d27672ac8132626daccda9aca4edca271134234 | "2022-06-25T04:30:41Z" | c++ | "2022-06-29T06:47:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,423 | ["src/Functions/filesystem.cpp", "tests/queries/0_stateless/02345_filesystem_local.reference", "tests/queries/0_stateless/02345_filesystem_local.sh"] | filesystem* functions don't work in `clickhouse-local` | ```
milovidov@milovidov-desktop:~/work/ClickHouse/benchmark/clickhouse-benchmark$ clickhouse-local
ClickHouse local version 22.7.1.1.
milovidov-desktop :) SELECT filesystemAvailable()
SELECT filesystemAvailable()
Query id: 3ef2d5d8-b3e7-470d-a1e0-e6146f18b9b8
0 rows in set. Elapsed: 0.145 sec.
Received exception:
Code: 1001. DB::Exception: Poco::NotFoundException: Not found. (STD_EXCEPTION)
```
And they give an inferior error message. | https://github.com/ClickHouse/ClickHouse/issues/38423 | https://github.com/ClickHouse/ClickHouse/pull/38424 | 7439ed8403a9f6df1245cb6215288fada7163a3c | 3c196214543143f0a75be8812f1517419a34da7e | "2022-06-25T04:04:23Z" | c++ | "2022-06-25T09:10:20Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,396 | ["src/AggregateFunctions/AggregateFunctionRankCorrelation.h", "src/AggregateFunctions/StatCommon.h", "tests/queries/0_stateless/01848_http_insert_segfault.sh", "tests/queries/0_stateless/02347_rank_corr_nan.reference", "tests/queries/0_stateless/02347_rank_corr_nan.sql"] | RankCorrelationData::getResult() takes long time | ```
CREATE TABLE default.`01802_empsalary`
(
`depname` LowCardinality(String),
`empno` UInt64,
`salary` Int32,
`enroll_date` Date
)
ENGINE = MergeTree
ORDER BY enroll_date
SETTINGS index_granularity = 8192;
insert into 01802_empsalary values ('sales',1,5000,'2006-10-01'),('develop',8,6000,'2006-10-01'),('personnel',2,3900,'2006-12-23'),('develop',10,5200,'2007-08-01'),('sales',3,4800,'2007-08-01'),('sales',4,4801,'2007-08-08'),('develop',11,5200,'2007-08-15'),('personnel',5,3500,'2007-12-10'),('develop',7,4200,'2008-01-01'),('develop',9,4500,'2008-01-01');
```
Query hangs
```
SELECT rankCorr(salary, nan) OVER (ORDER BY salary ASC Rows BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS func
FROM `01802_empsalary`
```
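A possible guard while the hang is unfixed (a sketch, not verified: it assumes the `-If` combinator composes with `rankCorr` as a window function in the usual way) is to drop NaN inputs before they reach the rank computation:

```sql
-- same shape as the hanging query, but NaN pairs are filtered out by the -If combinator
SELECT rankCorrIf(salary, x, NOT isNaN(x)) OVER (ORDER BY salary ASC Rows BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS func
FROM
(
    SELECT salary, nan AS x
    FROM `01802_empsalary`
)
```

The stack trace captured while the query was stuck: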
```
WITH arrayMap(x -> demangle(addressToSymbol(x)), trace) AS `all`
SELECT
thread_name,
thread_id,
query_id,
arrayStringConcat(`all`, '\n') AS res
FROM system.stack_trace
WHERE query_id = '41603474-6fdc-4790-bbed-53d919014ba5'
Query id: 9662b2c0-2a67-418c-8340-d3f19fea5c75
Row 1:
ββββββ
thread_name: TCPHandler
thread_id: 21417
query_id: 41603474-6fdc-4790-bbed-53d919014ba5
res:
ConcurrentBoundedQueue<DB::Chunk>::popImpl(DB::Chunk&, std::__1::optional<unsigned long>)
DB::LazyOutputFormat::getChunk(unsigned long)
DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)
DB::PullingAsyncPipelineExecutor::pull(DB::Block&, unsigned long)
DB::TCPHandler::processOrdinaryQueryWithProcessors()
DB::TCPHandler::runImpl()
DB::TCPHandler::run()
Poco::Net::TCPServerConnection::start()
Poco::Net::TCPServerDispatcher::run()
Poco::PooledThread::run()
Poco::ThreadImpl::runnableEntry(void*)
__clone
Row 2:
ββββββ
thread_name: QueryPullPipeEx
thread_id: 35307
query_id: 41603474-6fdc-4790-bbed-53d919014ba5
res: std::__1::pair<std::__1::vector<double, std::__1::allocator<double> >, double> DB::computeRanksAndTieCorrection<DB::PODArray<double, 32ul, DB::MixedArenaAllocator<4096ul, Allocator<false, false>, DB::AlignedArenaAllocator<8ul>, 8ul>, 0ul, 0ul> >(DB::PODArray<double, 32ul, DB::MixedArenaAllocator<4096ul, Allocator<false, false>, DB::AlignedArenaAllocator<8ul>, 8ul>, 0ul, 0ul> const&)
DB::RankCorrelationData::getResult()
DB::AggregateFunctionRankCorrelation::insertResultInto(char*, DB::IColumn&, DB::Arena*) const
DB::WindowTransform::writeOutCurrentRow()
DB::WindowTransform::appendChunk(DB::Chunk&)
DB::WindowTransform::work()
DB::ExecutionThreadContext::executeTask()
DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*)
DB::PipelineExecutor::executeImpl(unsigned long)
DB::PipelineExecutor::execute(unsigned long)
void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*)
ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>)
void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*)
__clone
2 rows in set. Elapsed: 0.020 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/38396 | https://github.com/ClickHouse/ClickHouse/pull/38722 | 1ee752b9a599dffea4c5d3d20aa7e1f90c9ae381 | bfc9ed61725ae1e123a0ba7ef5b4ccce8491f153 | "2022-06-24T16:30:33Z" | c++ | "2022-07-03T17:30:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,383 | ["src/AggregateFunctions/AggregateFunctionCategoricalInformationValue.cpp", "tests/queries/0_stateless/02427_column_nullable_ubsan.reference", "tests/queries/0_stateless/02427_column_nullable_ubsan.sql"] | fuzzer: failed assert_cast to ColumnNullable | https://s3.amazonaws.com/clickhouse-test-reports/36171/f7b329ee57bd6ddc758e3de4a07ae33af989124e/fuzzer_astfuzzerubsan,actions//report.html
```
../src/Common/assert_cast.h:50:12: runtime error: downcast of address 0x7f7ca4004220 which does not point to an object of type 'const DB::ColumnNullable'
0x7f7ca4004220: note: object is of type 'DB::ColumnConst'
    #0 0x242f2b6b in DB::ColumnNullable const& assert_cast<DB::ColumnNullable const&, DB::IColumn const&>(DB::IColumn const&) build_docker/../src/Common/assert_cast.h:50:12
    #1 0x242f2b6b in DB::ColumnNullable::compareAtImpl(unsigned long, unsigned long, DB::IColumn const&, int, Collator const*) const build_docker/../src/Columns/ColumnNullable.cpp:317:43
    #2 0x254bcb70 in DB::(anonymous namespace)::compareWithThreshold(...) build_docker/../src/Processors/Transforms/PartialSortingTransform.cpp:74:73
    #3 0x254bba4a in DB::PartialSortingTransform::transform(DB::Chunk&) build_docker/../src/Processors/Transforms/PartialSortingTransform.cpp:157:13
    #4 0x1d43caea in DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) build_docker/../src/Processors/ISimpleTransform.h:32:9
    #5 0x250e3590 in DB::ISimpleTransform::work() build_docker/../src/Processors/ISimpleTransform.cpp:89:9
    #6 0x2510a97f in DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:47:26
    #7 0x2510a556 in DB::ExecutionThreadContext::executeTask() build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:86:9
    #8 0x250fcb32 in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:222:26
    #9 0x250fc4e3 in DB::PipelineExecutor::executeSingleThread(unsigned long) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:187:5
    #10 0x250fc4e3 in DB::PipelineExecutor::executeImpl(unsigned long) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:331:9
    #11 0x250fc089 in DB::PipelineExecutor::execute(unsigned long) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:88:9
    #12 0x2510f182 in DB::threadFunction(DB::PullingAsyncPipelineExecutor::Data&, std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) build_docker/../src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:79:24
    ...

SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior ../src/Common/assert_cast.h:50:12

<Fatal> BaseDaemon: (version 22.7.1.1, build id: 1676B9CB06093897) (from thread 411) (query_id: e0ce4034-e820-41f4-b995-23087965ec08) (query: SELECT * FROM (SELECT * FROM (SELECT 0 AS a, toNullable(number) AS b, toString(number) AS c FROM numbers(1000000.)) ORDER BY a DESC, b DESC, c ASC LIMIT 1500) LIMIT 10) Received signal Unknown signal (-3)
<Fatal> BaseDaemon: Sanitizer trap.
``` | https://github.com/ClickHouse/ClickHouse/issues/38383 | https://github.com/ClickHouse/ClickHouse/pull/41463 | 560bc2bc22e2cb562bc0eebd58d90c7f2832a5cd | df52df83f9195d6d2c26b00fa04fd904b0c8f1a6 | "2022-06-24T12:21:45Z" | c++ | "2022-09-19T03:27:38Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,357 | ["docs/en/operations/server-configuration-parameters/settings.md", "docs/en/operations/settings/settings.md"] | Docs mention max_concurrent_queries_for_all_users as server-level but it produces an error | `max_concurrent_queries_for_all_users` (and `max_concurrent_queries_for_user`) are in [server configuration docs](https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings/#max-concurrent-queries-for-all-users) but using them at the top level of the server config leads to `<Error> Application: DB::Exception: A setting 'max_concurrent_queries_for_all_users' appeared at top level in config /etc/clickhouse-server/config.xml. But it is user-level setting that should be located in users.xml inside <profiles> section for specific profile.`
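Consistent with the error text, the setting belongs to the user/profile level; a minimal check of that (an assumption, not verified here) is that it is accepted as a query-level setting:

```sql
-- accepted as a user-level setting rather than a top-level server option
SELECT 1 SETTINGS max_concurrent_queries_for_all_users = 2;
```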
ClickHouse version: 22.6.1.1985 | https://github.com/ClickHouse/ClickHouse/issues/38357 | https://github.com/ClickHouse/ClickHouse/pull/50478 | c97f1735671a49293614162ae65f6df612009e40 | fb11f7eb6f8c53ba7a2c9066b4e001f529a6a0b1 | "2022-06-23T19:04:18Z" | c++ | "2023-06-02T14:56:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,333 | ["src/Functions/FunctionDateOrDateTimeToSomething.h", "tests/queries/0_stateless/02346_to_hour_monotonicity_fix.reference", "tests/queries/0_stateless/02346_to_hour_monotonicity_fix.sql"] | Index analysis doesn't work with toHour and Timezone condition | **Describe what's wrong**
The filter condition doesn't work correctly when functions like toHour are involved. I have other examples involving the toDayOfWeek function where the produced results are also incorrect, but I'll report those in a separate ticket.
**Does it reproduce on recent release?**
Yes, verified as part of v22.3-lts
**How to reproduce**
* ClickHouse server version: 21.12.4.1
* Interface: clickhouse-client
* Queries that lead to the unexpected result:
##### Table definition
```
clickhouse-db-02.server.internal :) show create db.articles_ext_data;
SHOW CREATE TABLE db.articles_ext_data
Query id: 383c56fa-21e0-4dae-bc78-9eb9adfd03b2
[clickhouse-db-02.server.internal] 2022.06.23 08:19:47.831837 [ 65324 ] {383c56fa-21e0-4dae-bc78-9eb9adfd03b2} <Debug> executeQuery: (from 127.0.0.1:43080) show create db.articles_ext_data;
[clickhouse-db-02.server.internal] 2022.06.23 08:19:47.831935 [ 65324 ] {383c56fa-21e0-4dae-bc78-9eb9adfd03b2} <Trace> ContextAccess (default): Access granted: SHOW COLUMNS ON db.articles_ext_data
ββstatementβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β CREATE TABLE db.articles_ext_data
(
`internal_id` String,
`timestamp` Nullable(DateTime('UTC')),
`url` Nullable(String),
`data_provider` String,
`document_length` UInt32,
`domain_name` String,
`is_near_duplicate` UInt8,
`publish_date` DateTime('UTC'),
`lang` Nullable(String),
`frames.label` Array(String),
`frames.score` Array(Float64),
`frames.version` Array(UInt32),
`frames.role` Array(Array(String)),
`frames.value` Array(Array(String)),
`frames.entity_id` Array(Array(UInt32)),
`frames.salience_score` Array(Array(Float64)),
`tags.id` Array(UInt32),
`frames.num_mentions` Array(UInt32),
`tags.name` Array(String),
`tags.score` Array(Float64),
`tags.tagger` Array(String),
`tags.checksum` Array(String),
`tags.type` Array(String),
`kpis.entity_id` Array(UInt32),
`kpis.salience_score` Array(Float64),
`kpis.num_mentions` Array(UInt32)
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/replicated/db/articles_ext_data', 'clickhouse-db-02.server.internal')
PARTITION BY toYYYYMMDD(publish_date)
PRIMARY KEY cityHash64(internal_id)
ORDER BY cityHash64(internal_id)
SAMPLE BY cityHash64(internal_id)
SETTINGS index_granularity = 8192 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
[clickhouse-db-02.server.internal] 2022.06.23 08:19:47.833999 [ 65324 ] {383c56fa-21e0-4dae-bc78-9eb9adfd03b2} <Information> executeQuery: Read 1 rows, 1.22 KiB in 0.002124662 sec., 470 rows/sec., 575.46 KiB/sec.
[clickhouse-db-02.server.internal] 2022.06.23 08:19:47.834029 [ 65324 ] {383c56fa-21e0-4dae-bc78-9eb9adfd03b2} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
1 rows in set. Elapsed: 0.003 sec.
```
```
# Working Use Case
SELECT
toHour(toTimeZone(publish_date, 'UTC')) AS toHour_UTC,
toHour(toTimeZone(publish_date, 'Asia/Jerusalem')) AS toHour_Israel
FROM db.articles_ext_data
WHERE (publish_date >= toTimeZone(toDateTime('2021-07-01 00:00:00'), 'Asia/Jerusalem')) AND (publish_date < toTimeZone(toDateTime('2021-09-30 23:59:59'), 'Asia/Jerusalem'))
HAVING toHour_UTC = 5
ORDER BY toHour_UTC DESC
LIMIT 10
Query id: 9032228d-a5ae-465b-985d-6cb5d8369ec8
[clickhouse-db-02.server.internal] 2022.06.23 08:18:11.827400 [ 65324 ] {9032228d-a5ae-465b-985d-6cb5d8369ec8} <Debug> executeQuery: (from 127.0.0.1:43080) SELECT toHour(toTimeZone(publish_date, 'UTC')) AS toHour_UTC, toHour(toTimeZone(publish_date, 'Asia/Jerusalem')) AS toHour_Israel FROM db.articles_ext_data WHERE (publish_date >= toTimeZone(toDateTime('2021-07-01 00:00:00'), 'Asia/Jerusalem')) AND (publish_date < toTimeZone(toDateTime('2021-09-30 23:59:59'), 'Asia/Jerusalem')) HAVING toHour_UTC = 5 ORDER BY toHour_UTC DESC LIMIT 10;
[clickhouse-db-02.server.internal] 2022.06.23 08:18:11.828408 [ 65324 ] {9032228d-a5ae-465b-985d-6cb5d8369ec8} <Trace> ContextAccess (default): Access granted: SELECT(publish_date) ON db.articles_ext_data
[clickhouse-db-02.server.internal] 2022.06.23 08:18:11.829203 [ 65324 ] {9032228d-a5ae-465b-985d-6cb5d8369ec8} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
[clickhouse-db-02.server.internal] 2022.06.23 08:18:11.829851 [ 65324 ] {9032228d-a5ae-465b-985d-6cb5d8369ec8} <Debug> db.articles_ext_data (bf32f1f5-ce1c-44e3-bf32-f1f5ce1c24e3) (SelectExecutor): Key condition: unknown, unknown, and, unknown, and
[clickhouse-db-02.server.internal] 2022.06.23 08:18:11.841597 [ 65324 ] {9032228d-a5ae-465b-985d-6cb5d8369ec8} <Debug> db.articles_ext_data (bf32f1f5-ce1c-44e3-bf32-f1f5ce1c24e3) (SelectExecutor): MinMax index condition: (column 0 in [1625097600, +Inf)), (column 0 in (-Inf, 1633046398]), and, (toHour(toTimezone(column 0)) in [5, 5]), and
[clickhouse-db-02.server.internal] 2022.06.23 08:18:11.843542 [ 65324 ] {9032228d-a5ae-465b-985d-6cb5d8369ec8} <Debug> db.articles_ext_data (bf32f1f5-ce1c-44e3-bf32-f1f5ce1c24e3) (SelectExecutor): Selected 117/3088 parts by partition key, 117 parts by primary key, 5858/5858 marks by primary key, 5858 marks to read from 117 ranges
[clickhouse-db-02.server.internal] 2022.06.23 08:18:11.845501 [ 65324 ] {9032228d-a5ae-465b-985d-6cb5d8369ec8} <Debug> db.articles_ext_data (bf32f1f5-ce1c-44e3-bf32-f1f5ce1c24e3) (SelectExecutor): Reading approx. 6585652 rows with 32 streams
[clickhouse-db-02.server.internal] 2022.06.23 08:18:11.853734 [ 53567 ] {9032228d-a5ae-465b-985d-6cb5d8369ec8} <Debug> MergingSortedTransform: Merge sorted 1 blocks, 10 rows in 0.007999958 sec., 1250.0065625344535 rows/sec., 15.63 KiB/sec
ββtoHour_UTCββ¬βtoHour_Israelββ
β 5 β 8 β
β 5 β 8 β
β 5 β 8 β
β 5 β 8 β
β 5 β 8 β
β 5 β 8 β
β 5 β 8 β
β 5 β 8 β
β 5 β 8 β
β 5 β 8 β
ββββββββββββββ΄ββββββββββββββββ
[clickhouse-db-02.server.internal] 2022.06.23 08:18:11.854394 [ 65324 ] {9032228d-a5ae-465b-985d-6cb5d8369ec8} <Information> executeQuery: Read 6585652 rows, 25.12 MiB in 0.026937251 sec., 244481220 rows/sec., 932.62 MiB/sec.
[clickhouse-db-02.server.internal] 2022.06.23 08:18:11.854421 [ 65324 ] {9032228d-a5ae-465b-985d-6cb5d8369ec8} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
10 rows in set. Elapsed: 0.028 sec. Processed 6.59 million rows, 26.34 MB (235.49 million rows/s., 941.98 MB/s.)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
# Not Working Use Case
SELECT
toHour(toTimeZone(publish_date, 'UTC')) AS toHour_UTC,
toHour(toTimeZone(publish_date, 'Asia/Jerusalem')) AS toHour_Israel
FROM db.articles_ext_data
WHERE (publish_date >= toTimeZone(toDateTime('2021-07-01 00:00:00'), 'Asia/Jerusalem')) AND (publish_date < toTimeZone(toDateTime('2021-09-30 23:59:59'), 'Asia/Jerusalem'))
HAVING toHour_Israel = 8
ORDER BY toHour_Israel DESC
LIMIT 10
Query id: c1c83e59-af83-40cc-b93e-d2a774186fa1
[clickhouse-db-02.server.internal] 2022.06.23 08:18:34.523436 [ 65324 ] {c1c83e59-af83-40cc-b93e-d2a774186fa1} <Debug> executeQuery: (from 127.0.0.1:43080) SELECT toHour(toTimeZone(publish_date, 'UTC')) AS toHour_UTC, toHour(toTimeZone(publish_date, 'Asia/Jerusalem')) AS toHour_Israel FROM db.articles_ext_data WHERE (publish_date >= toTimeZone(toDateTime('2021-07-01 00:00:00'), 'Asia/Jerusalem')) AND (publish_date < toTimeZone(toDateTime('2021-09-30 23:59:59'), 'Asia/Jerusalem')) HAVING toHour_Israel = 8 ORDER BY toHour_Israel DESC LIMIT 10
[clickhouse-db-02.server.internal] 2022.06.23 08:18:34.524450 [ 65324 ] {c1c83e59-af83-40cc-b93e-d2a774186fa1} <Trace> ContextAccess (default): Access granted: SELECT(publish_date) ON db.articles_ext_data
[clickhouse-db-02.server.internal] 2022.06.23 08:18:34.525283 [ 65324 ] {c1c83e59-af83-40cc-b93e-d2a774186fa1} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
[clickhouse-db-02.server.internal] 2022.06.23 08:18:34.526006 [ 65324 ] {c1c83e59-af83-40cc-b93e-d2a774186fa1} <Debug> db.articles_ext_data (bf32f1f5-ce1c-44e3-bf32-f1f5ce1c24e3) (SelectExecutor): Key condition: unknown, unknown, and, unknown, and
[clickhouse-db-02.server.internal] 2022.06.23 08:18:34.537831 [ 65324 ] {c1c83e59-af83-40cc-b93e-d2a774186fa1} <Debug> db.articles_ext_data (bf32f1f5-ce1c-44e3-bf32-f1f5ce1c24e3) (SelectExecutor): MinMax index condition: (column 0 in [1625097600, +Inf)), (column 0 in (-Inf, 1633046398]), and, (toHour(toTimezone(column 0)) in [8, 8]), and
[clickhouse-db-02.server.internal] 2022.06.23 08:18:34.537893 [ 65324 ] {c1c83e59-af83-40cc-b93e-d2a774186fa1} <Debug> db.articles_ext_data (bf32f1f5-ce1c-44e3-bf32-f1f5ce1c24e3) (SelectExecutor): Selected 0/3088 parts by partition key, 0 parts by primary key, 0/0 marks by primary key, 0 marks to read from 0 ranges
[clickhouse-db-02.server.internal] 2022.06.23 08:18:34.538562 [ 65324 ] {c1c83e59-af83-40cc-b93e-d2a774186fa1} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
Ok.
0 rows in set. Elapsed: 0.016 sec.
```
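To confirm that rows with toHour_Israel = 8 really exist (so that the empty result above is an index-analysis bug rather than missing data), the hour filter can be dropped entirely; this diagnostic sketch has no HAVING clause, so no hour condition reaches the minmax index:

```sql
-- diagnostic: count rows per (UTC hour, Israel hour) pair in the same date range
SELECT
    toHour(toTimeZone(publish_date, 'UTC')) AS toHour_UTC,
    toHour(toTimeZone(publish_date, 'Asia/Jerusalem')) AS toHour_Israel,
    count() AS c
FROM db.articles_ext_data
WHERE (publish_date >= toTimeZone(toDateTime('2021-07-01 00:00:00'), 'Asia/Jerusalem'))
  AND (publish_date < toTimeZone(toDateTime('2021-09-30 23:59:59'), 'Asia/Jerusalem'))
GROUP BY toHour_UTC, toHour_Israel
ORDER BY toHour_UTC ASC
```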
**Expected behavior**
The expected results should be identical to the first query's response, regardless of whether the condition **toHour_UTC = 5** or **toHour_Israel = 8** is applied.
**Additional context**
On small tables with synthetic data I was unable to reproduce the issue. I also tried to copy the data from one table to another with the same structure, and after the data migration completed, the issue reproduced consistently.
Might be related to https://github.com/ClickHouse/ClickHouse/issues/10977
| https://github.com/ClickHouse/ClickHouse/issues/38333 | https://github.com/ClickHouse/ClickHouse/pull/39037 | ca2e27aaa2be06575725f2d9525d647f69075bce | 57a719bafdb23fb955540859fd76b81efad36d18 | "2022-06-23T08:30:32Z" | c++ | "2022-07-11T11:36:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,284 | ["docs/en/sql-reference/aggregate-functions/grouping_function.md"] | Add grouping function to documentation | The GROUPING aggregate function has been added to ClickHouse and needs docs.
Dmitri, is this where you would want the docs to appear:
*(screenshot of the proposed docs location)*
If we cover everything in these test files, is that what is needed?
```
tests/queries/0_stateless/02293_grouping_function.sql
tests/queries/0_stateless/02293_grouping_function_group_by.sql
```
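For the docs themselves, a minimal usage example along the lines covered by those tests might look like this (a sketch; the exact values GROUPING returns depend on the implementation's bit ordering, so the output should be taken from a real run):

```sql
-- GROUPING reports which keys were aggregated away in each ROLLUP row
SELECT
    number % 2 AS a,
    number % 3 AS b,
    GROUPING(a, b) AS g,
    count() AS c
FROM numbers(12)
GROUP BY ROLLUP(a, b)
ORDER BY a, b, g
```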
I will get things going and we can iterate over it, Ok? | https://github.com/ClickHouse/ClickHouse/issues/38284 | https://github.com/ClickHouse/ClickHouse/pull/38308 | 6a69e0879923a059b28cb31e609d50730f47e03e | c9dea66f8d37c13e27678eeeec246c6a97e40e67 | "2022-06-21T16:52:55Z" | c++ | "2022-08-25T20:03:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,282 | ["src/Interpreters/InterpreterSelectQuery.cpp", "tests/queries/0_stateless/02344_distinct_limit_distiributed.reference", "tests/queries/0_stateless/02344_distinct_limit_distiributed.sql"] | incorrect result: distinct + distributed + limit | 21.8.13.6, 22.3.7.28, 22.5.1.2079, 22.6.1.1985
distinct + limit over distributed tables stops reading rows from the table before reaching the limit.
```sql
cat xy.sql
drop table if exists test;
create table test (d Date, id Int64 ) Engine = MergeTree partition by toYYYYMM(d) order by d;
insert into test select '2021-12-15', -1 from numbers(1e7);
insert into test select '2021-12-15', -1 from numbers(1e7);
insert into test select '2021-12-15', -1 from numbers(1e7);
insert into test select '2021-12-15', -1 from numbers(1e7);
insert into test select '2022-12-15', 1 from numbers(1e7);
insert into test select '2022-12-16', 11 from numbers(1);
insert into test select '2023-12-16', 12 from numbers(1);
insert into test select '2023-12-16', 13 from numbers(1);
insert into test select '2023-12-16', 14 from numbers(1);
select distinct id from remote('127.0.0.2,127.0.0.1', currentDatabase(),test) limit 10;
select '-----';
select distinct id from remote('127.0.0.2,127.0.0.1', currentDatabase(),test) ;
```
```
$ cat xy.sql |clickhouse-client -mn
1
-1
-----
-1
1
11
12
13
$ cat xy.sql |clickhouse-client -mn
1
-1
11
-----
1
-1
11
12
13
14
$ cat xy.sql |clickhouse-client -mn
-1
1
-----
-1
1
11
12
13
14
$ cat xy.sql |clickhouse-client -mn
1
-1
11
-----
-1
1
11
12
13
14
$ cat xy.sql |clickhouse-client -mn
1
-1
-----
-1
1
11
12
13
14
```
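A quick invariant check against the repro table (a sketch): without any LIMIT, the distinct count is stable, which makes the truncated LIMIT results above clearly wrong:

```sql
-- expected: 6 (the distinct ids inserted above are -1, 1, 11, 12, 13, 14)
SELECT count()
FROM
(
    SELECT DISTINCT id
    FROM remote('127.0.0.2,127.0.0.1', currentDatabase(), test)
);
```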
------
in real prod it looks like this
```sql
-- very fast, but incorrect
select distinct ext_io_id from csp_ad_fact_event where access_day > '2021-12-15' limit 10;
ββext_io_idββ
β -1 β
βββββββββββββ
1 rows in set. Elapsed: 0.062 sec. Processed 17.30 million rows, 172.98 MB (278.38 million rows/s., 2.78 GB/s.)
-- without limit
select distinct ext_io_id from csp_ad_fact_event where access_day > '2021-12-15';
ββext_io_idββ
β -1 β
βββββββββββββ
ββext_io_idββ
β 23120 β
βββββββββββββ
ββext_io_idββ
β 2704949 β
βββββββββββββ
3 rows in set. Elapsed: 25.510 sec. Processed 376.27 billion rows, 3.76 TB (14.75 billion rows/s., 147.50 GB/s.)
-- limit 100
select distinct ext_io_id from csp_ad_fact_event where access_day > '2021-12-15' limit 100;
ββext_io_idββ
β -1 β
βββββββββββββ
ββext_io_idββ
β 23120 β
βββββββββββββ
ββext_io_idββ
β 2704949 β
βββββββββββββ
3 rows in set. Elapsed: 26.331 sec. Processed 376.33 billion rows, 3.76 TB (14.29 billion rows/s., 142.92 GB/s.)
-- with order by
select distinct ext_io_id from csp_ad_fact_event where access_day > '2021-12-15' order by ext_io_id limit 10;
ββext_io_idββ
β -1 β
βββββββββββββ
ββext_io_idββ
β 23120 β
βββββββββββββ
ββext_io_idββ
β 2704949 β
βββββββββββββ
3 rows in set. Elapsed: 26.103 sec. Processed 376.36 billion rows, 3.76 TB (14.42 billion rows/s., 144.18 GB/s.)
```
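Judging by the last production query above, adding an ORDER BY before the LIMIT returned all three distinct values, so a plausible (unverified) workaround for the synthetic repro is:

```sql
-- workaround sketch: sorting before LIMIT appears to prevent the early cutoff
SELECT DISTINCT id
FROM remote('127.0.0.2,127.0.0.1', currentDatabase(), test)
ORDER BY id ASC
LIMIT 10;
```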
Probably related to the optimization with small limits. | https://github.com/ClickHouse/ClickHouse/issues/38282 | https://github.com/ClickHouse/ClickHouse/pull/38371 | 8246e55002a34c8d88f3f6de28d757441660df12 | e78814f3bb843d11cb5d545558998a10845578ed | "2022-06-21T16:06:50Z" | c++ | "2022-06-29T12:02:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,134 | ["src/Interpreters/AsynchronousMetricLog.h"] | asynchronous_metric_log: `value Float64 CODEC(Gorilla, ZSTD(3))` is worse than simply `ZSTD(3)` | Maybe all the specialized codecs (DoubleDelta, Gorilla, etc.) are obsolete and people should simply use ZSTD instead. | https://github.com/ClickHouse/ClickHouse/issues/38134 | https://github.com/ClickHouse/ClickHouse/pull/38428 | 1d661d0badaf823e1d4eb3d81eefaae5988b561f | 90307177b4f3d7e01be3c99e2c8276bf05ae02ed | "2022-06-16T11:14:30Z" | c++ | "2022-06-25T15:59:02Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,128 | ["tests/queries/0_stateless/02715_or_null.reference", "tests/queries/0_stateless/02715_or_null.sql"] | -OrNull combinator returns unexpected values |
**Describe the unexpected behaviour**
**How to reproduce**
ClickHouse 22.3, 22.6
```
SELECT argMaxOrNull(id, timestamp)
FROM
(
SELECT
CAST(NULL, 'Nullable(UInt32)') AS id,
2 AS timestamp
)
Query id: 17bf3c85-de37-428f-990e-e82c78cc06fa
ββargMaxOrNull(id, timestamp)ββ
β 0 β <-- expected to have NULL here
βββββββββββββββββββββββββββββββ
SELECT
argMax(id, timestamp),
argMaxOrNull(id, timestamp)
FROM
(
SELECT
CAST(NULL, 'Nullable(UInt32)') AS id,
2 AS timestamp
UNION ALL
SELECT
1 AS id,
1 AS timestamp
)
Query id: 01180512-509b-481e-9833-1256cc4d01c1
ββargMax(id, timestamp)ββ¬βargMaxOrNull(id, timestamp)ββ
β 1 β 0 β <- expected to have 1 here as latest non-null value
βββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββ
Workaround for second issue:
SELECT argMaxIfOrNull(id, timestamp, id IS NOT NULL)
FROM
(
SELECT
CAST(NULL, 'Nullable(UInt32)') AS id,
2 AS timestamp
UNION ALL
SELECT
1 AS id,
1 AS timestamp
)
Query id: a3f72ba8-b17c-40b3-955e-22d311c0e428
ββargMaxOrNullIf(id, timestamp, isNotNull(id))ββ
β 1 β
ββββββββββββββββββββββββββββββββββββββββββββββββ
```
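For contrast, the combinator does behave as documented when there is no input at all (an assumption about the intended semantics: -OrNull returns NULL for an empty aggregation), which suggests the bug is specific to NULL values being fed into argMax rather than to the combinator as such:

```sql
-- empty input: -OrNull correctly yields NULL instead of a default value
SELECT argMaxOrNull(number, number)
FROM numbers(0);
```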
**Expected behavior**
For the first query, it should return NULL.
For the second, it should either return 1, or it should be confirmed that argMaxOrNull is working as intended. | https://github.com/ClickHouse/ClickHouse/issues/38128 | https://github.com/ClickHouse/ClickHouse/pull/48817 | 54f660bd4fed8a711fdad5d91b7ceb4d940245f9 | fb40200302be136dc70c85149cdfc8dbf5857e94 | "2022-06-16T08:56:22Z" | c++ | "2023-04-16T22:15:25Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 38,049 | ["src/Databases/DatabasesCommon.cpp", "src/Interpreters/InterpreterCreateQuery.cpp", "src/Parsers/ASTCreateQuery.cpp", "src/Parsers/ASTCreateQuery.h", "src/Parsers/ParserCreateQuery.cpp", "tests/queries/0_stateless/02343_create_empty_as_select.reference", "tests/queries/0_stateless/02343_create_empty_as_select.sql"] | CREATE TABLE ... EMPTY AS SELECT | CREATE TABLE ... **EMPTY** AS SELECT
**Use case**
Create a table with the same structure as a SELECT query returns, but don't fill it.
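A workaround that is possible today (a sketch, assuming CREATE ... AS SELECT ... LIMIT 0 infers the structure from the query and inserts nothing):

```sql
-- hypothetical current workaround: structure comes from the SELECT, zero rows are inserted
CREATE TABLE t ENGINE = MergeTree ORDER BY tuple() AS
SELECT
    number AS x,
    toString(number) AS s
FROM numbers(10)
LIMIT 0;
```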
**Describe alternatives you've considered**
CREATE TABLE ... AS DESCRIBE (SELECT ...)
\- more consistent but also more clunky. | https://github.com/ClickHouse/ClickHouse/issues/38049 | https://github.com/ClickHouse/ClickHouse/pull/38272 | d3bc7c0190fd8d670d4b2e2a1d3d7d332b1998ae | 31c4f83469c6b9b24d3b3e16f9b4b191bee0f6f2 | "2022-06-14T11:26:16Z" | c++ | "2022-06-22T10:25:12Z" |