Dataset of GitHub issues and the pull requests that fixed them. Columns: status, repo_name, repo_url, issue_id, updated_files, title, body, issue_url, pull_url, before_fix_sha, after_fix_sha, report_datetime, language, commit_datetime.
**[closed] ClickHouse/ClickHouse, issue 38018: Wrong table name in logs after RENAME TABLE**
Updated files: ["src/Common/logger_useful.h", "src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeData.h", "tests/queries/0_stateless/02360_rename_table_along_with_log_name.reference", "tests/queries/0_stateless/02360_rename_table_along_with_log_name.sh"]

**Describe what's wrong**
Wrong table name in logs after table `RENAME`
**Does it reproduce on recent release?**
Yes.
**How to reproduce**
On version 22.6:
```
CREATE TABLE table_test_a
(
`key` UInt32
)
ENGINE = MergeTree
ORDER BY key;
INSERT INTO table_test_a SELECT *
FROM numbers(1000);
RENAME TABLE table_test_a TO table_test_b;
SELECT *
FROM table_test_b
FORMAT `Null`;
[LAPTOP-] 2022.06.13 10:58:18.370192 [ 27745 ] {43b0dc5d-69e6-4e42-b96c-a13871b6a791} <Debug> executeQuery: (from 127.0.0.1:58484) SELECT * FROM table_test_b FORMAT Null; (stage: Complete)
[LAPTOP-] 2022.06.13 10:58:18.370599 [ 27745 ] {43b0dc5d-69e6-4e42-b96c-a13871b6a791} <Trace> ContextAccess (default): Access granted: SELECT(key) ON test.table_test_b
[LAPTOP-] 2022.06.13 10:58:18.370665 [ 27745 ] {43b0dc5d-69e6-4e42-b96c-a13871b6a791} <Trace> ContextAccess (default): Access granted: SELECT(key) ON test.table_test_b
[LAPTOP-] 2022.06.13 10:58:18.370714 [ 27745 ] {43b0dc5d-69e6-4e42-b96c-a13871b6a791} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
[LAPTOP-] 2022.06.13 10:58:18.370783 [ 27745 ] {43b0dc5d-69e6-4e42-b96c-a13871b6a791} <Debug> test.table_test_a (SelectExecutor): Key condition: unknown
[LAPTOP-] 2022.06.13 10:58:18.370817 [ 27745 ] {43b0dc5d-69e6-4e42-b96c-a13871b6a791} <Debug> test.table_test_a (SelectExecutor): Selected 1/1 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges
[LAPTOP-] 2022.06.13 10:58:18.370847 [ 27745 ] {43b0dc5d-69e6-4e42-b96c-a13871b6a791} <Trace> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part all_1_1_0, approx. 1000 rows starting from 0
[LAPTOP-] 2022.06.13 10:58:18.371812 [ 27745 ] {43b0dc5d-69e6-4e42-b96c-a13871b6a791} <Information> executeQuery: Read 1000 rows, 3.91 KiB in 0.0015354 sec., 651296 rows/sec., 2.48 MiB/sec.
[LAPTOP-] 2022.06.13 10:58:18.371956 [ 27745 ] {43b0dc5d-69e6-4e42-b96c-a13871b6a791} <Debug> MemoryTracker: Peak memory usage (for query): 45.10 KiB.
```
**Expected behavior**
Correct table name in logs
**Additional context**
Interesting question: does this also affect grants and the like? (It looks like it does not — in the ContextAccess log the correct table name is used.)
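The bug pattern here is a logger name captured once at table creation and never refreshed on rename; the name of the updated test (`02360_rename_table_along_with_log_name`) suggests the fix keeps the log name in sync with the table name. A minimal sketch of that invariant (hypothetical Python model, not the actual ClickHouse code):

```python
class Table:
    """Toy model of a storage that tags its log lines with its name."""

    def __init__(self, database: str, name: str):
        self.database = database
        self.name = name
        # A buggy variant would freeze this string forever at creation time.
        self._log_name = f"{database}.{name}"

    def rename(self, new_name: str) -> None:
        self.name = new_name
        # The fix: refresh the cached log name together with the rename.
        self._log_name = f"{self.database}.{new_name}"

    def log(self, message: str) -> str:
        return f"<Debug> {self._log_name} (SelectExecutor): {message}"


t = Table("test", "table_test_a")
t.rename("table_test_b")
print(t.log("Key condition: unknown"))
# <Debug> test.table_test_b (SelectExecutor): Key condition: unknown
```

With the stale cached name, the last line would still read `test.table_test_a`, exactly as in the log excerpt above.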
issue: https://github.com/ClickHouse/ClickHouse/issues/38018 | fix: https://github.com/ClickHouse/ClickHouse/pull/39227 | before-fix SHA: 54abeb624a671f7439f9a612e7f13e5464b66d28 | after-fix SHA: c683cb252f84bf540067e4f173a1abf056f0881b | reported: 2022-06-13T11:00:56Z | language: c++ | fixed: 2022-07-26T15:12:44Z
**[closed] ClickHouse/ClickHouse, issue 38003: CPU usage in client sometimes displaying -0.0**
Updated files: ["src/Client/ClientBase.cpp", "src/Common/ProgressIndication.cpp", "src/Common/ProgressIndication.h"]

Simply display zero if it's less than or equal to 0.

issue: https://github.com/ClickHouse/ClickHouse/issues/38003 | fix: https://github.com/ClickHouse/ClickHouse/pull/38064 | before-fix SHA: 5e9e5a4eaf2babd321461069fa195bc857e2cd95 | after-fix SHA: b2f0c16f426f8ebf6ebb61873ff5a17430f5e0e8 | reported: 2022-06-11T21:05:30Z | language: c++ | fixed: 2022-06-15T01:14:35Z

**[closed] ClickHouse/ClickHouse, issue 37983: AsynchronousMetrics: add info about the total memory amount with respect to cgroup limits**
Updated files: ["src/Common/AsynchronousMetrics.cpp"]

**Use case**
Better monitoring of ClickHouse in Kubernetes.

**Describe the solution you'd like**
Add a new metric.

issue: https://github.com/ClickHouse/ClickHouse/issues/37983 | fix: https://github.com/ClickHouse/ClickHouse/pull/45999 | before-fix SHA: 8465b6cdf0f1cc563fd70f299d7bb91a6dab86c3 | after-fix SHA: cd171bef7d875019be517ca7f5758d660c3202bd | reported: 2022-06-10T13:59:09Z | language: c++ | fixed: 2023-04-28T17:39:53Z
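The metric requested above boils down to min(host memory, cgroup limit). A sketch of the computation (the function name is hypothetical; a real implementation would read cgroup v2 `/sys/fs/cgroup/memory.max`, or cgroup v1 `memory.limit_in_bytes`, and here the file content is simply passed in as a string):

```python
def effective_memory_total(host_total_bytes: int, cgroup_memory_max: str) -> int:
    """Return the memory amount the process can actually use.

    cgroup_memory_max is the content of cgroup v2 'memory.max':
    either a byte count, or the literal string 'max' meaning no limit.
    """
    limit = cgroup_memory_max.strip()
    if limit == "max":
        return host_total_bytes
    return min(host_total_bytes, int(limit))


# A 64 GiB host constrained to 4 GiB by Kubernetes:
print(effective_memory_total(64 * 2**30, "4294967296"))  # 4294967296
```

Reporting this value instead of the raw host total is what makes memory dashboards meaningful inside a pod.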
**[closed] ClickHouse/ClickHouse, issue 37904: Inconsistent result for subselect having ORDER BY WITH FILL when select is also ordered**
Updated files: ["src/Processors/QueryPlan/Optimizations/liftUpFunctions.cpp", "tests/queries/0_stateless/02336_sort_optimization_with_fill.reference", "tests/queries/0_stateless/02336_sort_optimization_with_fill.sql"]

**Describe what's wrong**
Inconsistent result for subselect having ORDER BY WITH FILL when select is also ordered.
**Does it reproduce on recent release?**
yes
**How to reproduce**
```
SELECT x, s FROM (
SELECT 5 AS x, 'Hello' AS s ORDER BY x WITH FILL FROM 1 TO 10
);
┌─x─┬─s─────┐
│ 1 │       │
│ 2 │       │
│ 3 │       │
│ 4 │       │
│ 5 │ Hello │
└───┴───────┘
┌─x─┬─s─┐
│ 6 │   │
│ 7 │   │
│ 8 │   │
│ 9 │   │
└───┴───┘
SELECT x, s FROM (
SELECT 5 AS x, 'Hello' AS s ORDER BY x WITH FILL FROM 1 TO 10
) ORDER BY s;
┌─x─┬─s─────┐
│ 5 │       │
│ 5 │       │
│ 5 │       │
│ 5 │       │
│ 5 │       │
│ 5 │       │
│ 5 │       │
│ 5 │       │
│ 5 │ Hello │
└───┴───────┘
```
**Expected behavior**
`x` column should preserve values.

issue: https://github.com/ClickHouse/ClickHouse/issues/37904 | fix: https://github.com/ClickHouse/ClickHouse/pull/37959 | before-fix SHA: 6177eb0e835cf13c8116d4ae43847a4a552cc0a4 | after-fix SHA: 8f6fee76fb14c08ba9267b1bece2e505136ce5b2 | reported: 2022-06-07T14:13:20Z | language: c++ | fixed: 2022-06-10T16:58:59Z
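The expected semantics of the issue above can be modeled simply: `WITH FILL FROM 1 TO 10` generates every x in [1, 10) and keeps original rows where present; an outer `ORDER BY` may only permute those rows, never overwrite x. A sketch (a model of the expected behavior, not ClickHouse's implementation):

```python
def order_by_with_fill(rows, frm, to, default=""):
    """rows: list of (x, s) pairs sorted by x; fill missing x in [frm, to)."""
    present = dict(rows)
    return [(x, present.get(x, default)) for x in range(frm, to)]


print(order_by_with_fill([(5, "Hello")], 1, 10))
# [(1, ''), (2, ''), (3, ''), (4, ''), (5, 'Hello'), (6, ''), (7, ''), (8, ''), (9, '')]
```

Sorting this result by `s` afterwards should keep the distinct x values 1 through 9, whereas the buggy output collapses them all to 5.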
**[closed] ClickHouse/ClickHouse, issue 37900: max_insert_threads and materialized views performance degradation**
Updated files: ["src/Interpreters/InterpreterInsertQuery.cpp", "tests/performance/views_max_insert_threads.xml", "tests/queries/0_stateless/01275_parallel_mv.reference", "tests/queries/0_stateless/01275_parallel_mv.sql", "tests/queries/0_stateless/01275_parallel_mv.sql.j2", "tests/queries/0_stateless/02350_views_max_insert_threads.reference", "tests/queries/0_stateless/02350_views_max_insert_threads.sql"]

Test:
```sql
DROP TABLE IF EXISTS t;
DROP TABLE IF EXISTS t_mv;
create table t (a UInt64) Engine = Null;
create materialized view t_mv Engine = Null AS select now() as ts, max(a) from t group by ts;
insert into t select * from numbers_mt(3000000000) settings max_threads = 16, max_insert_threads=16;
```
results:
```
21.8:
0 rows in set. Elapsed: 3.841 sec. Processed 3.00 billion rows, 24.00 GB (781.01 million rows/s., 6.25 GB/s.)
21.10:
0 rows in set. Elapsed: 3.844 sec. Processed 6.00 billion rows, 48.00 GB (1.56 billion rows/s., 12.49 GB/s.)
21.11:
0 rows in set. Elapsed: 17.717 sec. Processed 5.98 billion rows, 47.86 GB (337.70 million rows/s., 2.70 GB/s.)
21.12
0 rows in set. Elapsed: 18.569 sec. Processed 6.00 billion rows, 48.00 GB (323.12 million rows/s., 2.58 GB/s.)
22.1
0 rows in set. Elapsed: 19.124 sec. Processed 6.00 billion rows, 48.00 GB (313.74 million rows/s., 2.51 GB/s.)
22.2
0 rows in set. Elapsed: 19.939 sec. Processed 6.00 billion rows, 48.00 GB (300.92 million rows/s., 2.41 GB/s.)
22.3
0 rows in set. Elapsed: 18.727 sec. Processed 6.00 billion rows, 48.00 GB (320.39 million rows/s., 2.56 GB/s.)
22.4
0 rows in set. Elapsed: 19.698 sec. Processed 6.00 billion rows, 48.00 GB (304.60 million rows/s., 2.44 GB/s.)
22.5
0 rows in set. Elapsed: 22.723 sec. Processed 6.00 billion rows, 48.00 GB (264.05 million rows/s., 2.11 GB/s.)
```
Side note: pay attention to the processed-rows count and the progress bar behavior (minor cosmetic issue).

issue: https://github.com/ClickHouse/ClickHouse/issues/37900 | fix: https://github.com/ClickHouse/ClickHouse/pull/38731 | before-fix SHA: 7f216d1b9914436d28e4e0e9a04d4b02c9f1cb38 | after-fix SHA: c7110123991f617069ee76ae509aef923907519a | reported: 2022-06-07T09:52:37Z | language: c++ | fixed: 2022-07-04T04:43:26Z
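The regression in the table above is easiest to see as throughput. A quick arithmetic check of the reported numbers (note that from 21.10 on, 6 billion processed rows means the 3 billion source rows are counted twice — once for the insert and once for the materialized view):

```python
def rows_per_sec(rows: float, elapsed: float) -> float:
    """Throughput as reported by the client: processed rows / elapsed seconds."""
    return rows / elapsed


good = rows_per_sec(6.00e9, 3.844)    # 21.10: ~1.56 billion rows/s
bad = rows_per_sec(6.00e9, 18.569)    # 21.12: ~323 million rows/s
print(f"slowdown: {good / bad:.1f}x")  # slowdown: 4.8x
```

So the 21.11 change introduced roughly a 5x slowdown for inserts into a table with a materialized view.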
**[closed] ClickHouse/ClickHouse, issue 37847: Remove "CLICKHOUSE_SPLIT_BINARY" option**
Updated files: ["CMakeLists.txt", "docker/packager/packager", "docs/en/development/continuous-integration.md", "docs/en/development/developer-instruction.md", "docs/ru/development/developer-instruction.md", "programs/CMakeLists.txt", "programs/main.cpp", "src/Common/config.h.in", "src/Storages/System/StorageSystemBuildOptions.generated.cpp.in", "tests/integration/CMakeLists.txt"]

People said it does not work.

issue: https://github.com/ClickHouse/ClickHouse/issues/37847 | fix: https://github.com/ClickHouse/ClickHouse/pull/39520 | before-fix SHA: 0f2177127b7bb1517e0acab815ded905f5ba1390 | after-fix SHA: 52d08d9db4c46e9f0a23a1913d3adac222630689 | reported: 2022-06-03T23:00:53Z | language: c++ | fixed: 2022-07-31T12:23:31Z
**[closed] ClickHouse/ClickHouse, issue 37831: OOM in query with `if` and large `FixedString`**
Updated files: ["docker/test/fuzzer/query-fuzzer-tweaks-users.xml"]

**How to reproduce**
`clickhouse benchmark -c16 <<< "SELECT if(number % 2 = 0, toFixedString(toString(number), 100000), '') FROM numbers(100000) format Null"`

issue: https://github.com/ClickHouse/ClickHouse/issues/37831 | fix: https://github.com/ClickHouse/ClickHouse/pull/45032 | before-fix SHA: c0cbb6a5cc9dc696a5dd7b70bd4a952dca566be0 | after-fix SHA: 32cc41f0ebc41b4cee425c823de5e051f0eaa888 | reported: 2022-06-03T12:52:39Z | language: c++ | fixed: 2023-01-08T06:41:14Z
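Back-of-envelope arithmetic for the repro above: a `FixedString(100000)` column stores the full width for every row, so fully materializing the branch is about 10 GB per query, times 16 concurrent clients (a rough upper bound that ignores block-wise execution and copies):

```python
rows = 100_000
fixed_string_bytes = 100_000  # FixedString(100000) stores the full width per row
concurrency = 16              # clickhouse benchmark -c16

per_query = rows * fixed_string_bytes  # 10_000_000_000 bytes, ~9.3 GiB
total = per_query * concurrency        # ~149 GiB across the benchmark
print(per_query, total)
```

Even with block-at-a-time processing, large per-row widths like this make the `if` branches very expensive to materialize, which is why the fuzzer tweaks cap such queries.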
**[closed] ClickHouse/ClickHouse, issue 37815: Block structure mismatch in function connect between CopyingDataToViewsTransform and PushingToWindowViewSink**
Updated files: ["src/Storages/WindowView/StorageWindowView.cpp", "src/Storages/WindowView/StorageWindowView.h", "tests/queries/0_stateless/01076_window_view_alter_query_to.sh", "tests/queries/0_stateless/01077_window_view_alter_query_to_modify_source.sh", "tests/queries/0_stateless/01087_window_view_alter_query.sh", "tests/queries/0_stateless/01088_window_view_default_column.reference", "tests/queries/0_stateless/01088_window_view_default_column.sh"]

https://s3.amazonaws.com/clickhouse-test-reports/0/58f8c8726595e54092cf8f30fed729a7ed1d3fd3/stress_test__undefined__actions_.html
```
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:23:48.748947 [ 13574 ] {e0d43260-253d-4ae4-b286-46b0f883c55e} <Fatal> : Logical error: 'Block structure mismatch in function connect between CopyingDataToViewsTransform and PushingToWindowViewSink stream: different names of columns:
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.154250 [ 50174 ] {} <Fatal> BaseDaemon: ########################################
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.155146 [ 50174 ] {} <Fatal> BaseDaemon: (version 22.6.1.1 (official build), build id: F6EB5249B406F1C9) (from thread 13574) (query_id: e0d43260-253d-4ae4-b286-46b0f883c55e) (query: insert into mt values ) Received signal Aborted (6)
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.170638 [ 50174 ] {} <Fatal> BaseDaemon:
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.170981 [ 50174 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f41f8dbd03b 0x7f41f8d9c859 0xfc35d2b 0xfc35f85 0x227cc964 0x227c5106 0x227ca323 0x24d93826 0x251b43ce 0x238c45d8 0x238c6908 0x23d69e34 0x23d6690a 0x24d24d98 0x24d41cf6 0x26be2e0c 0x26be32e5 0x26d61547 0x26d5ee2c 0x7f41f8f74609 0x7f41f8e99163
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.173551 [ 50174 ] {} <Fatal> BaseDaemon: 3. raise @ 0x7f41f8dbd03b in ?
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.173869 [ 50174 ] {} <Fatal> BaseDaemon: 4. abort @ 0x7f41f8d9c859 in ?
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.200944 [ 50174 ] {} <Fatal> BaseDaemon: 5. ./build_docker/../src/Common/Exception.cpp:47: DB::abortOnFailedAssertion(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xfc35d2b in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.229140 [ 50174 ] {} <Fatal> BaseDaemon: 6. ./build_docker/../src/Common/Exception.cpp:70: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xfc35f85 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.264224 [ 50174 ] {} <Fatal> BaseDaemon: 7. ./build_docker/../src/Core/Block.cpp:35: void DB::onError<void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x227cc964 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.328316 [ 50174 ] {} <Fatal> BaseDaemon: 8. ./build_docker/../src/Core/Block.cpp:0: void DB::checkColumnStructure<void>(DB::ColumnWithTypeAndName const&, DB::ColumnWithTypeAndName const&, std::__1::basic_string_view<char, std::__1::char_traits<char> >, bool, int) @ 0x227c5106 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.386634 [ 50174 ] {} <Fatal> BaseDaemon: 9. ./build_docker/../src/Core/Block.cpp:98: void DB::checkBlockStructure<void>(DB::Block const&, DB::Block const&, std::__1::basic_string_view<char, std::__1::char_traits<char> >, bool) @ 0x227ca323 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.495700 [ 50174 ] {} <Fatal> BaseDaemon: 10. ./build_docker/../src/Processors/Port.cpp:0: DB::connect(DB::OutputPort&, DB::InputPort&) @ 0x24d93826 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.538192 [ 50174 ] {} <Fatal> BaseDaemon: 11. ./build_docker/../src/Processors/Transforms/buildPushingToViewsChain.cpp:375: DB::buildPushingToViewsChain(std::__1::shared_ptr<DB::IStorage> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>, std::__1::shared_ptr<DB::IAST> const&, bool, DB::ThreadStatus*, std::__1::atomic<unsigned long>*, DB::Block const&) @ 0x251b43ce in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.784897 [ 50174 ] {} <Fatal> BaseDaemon: 12. ./build_docker/../src/Interpreters/InterpreterInsertQuery.cpp:253: DB::InterpreterInsertQuery::buildChainImpl(std::__1::shared_ptr<DB::IStorage> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::Block const&, DB::ThreadStatus*, std::__1::atomic<unsigned long>*) @ 0x238c45d8 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.904851 [ 50174 ] {} <Fatal> BaseDaemon: 13. ./build_docker/../src/Interpreters/InterpreterInsertQuery.cpp:431: DB::InterpreterInsertQuery::execute() @ 0x238c6908 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:17.993259 [ 50174 ] {} <Fatal> BaseDaemon: 14. ./build_docker/../src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x23d69e34 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.167544 [ 50174 ] {} <Fatal> BaseDaemon: 15. ./build_docker/../src/Interpreters/executeQuery.cpp:1069: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x23d6690a in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.347373 [ 50174 ] {} <Fatal> BaseDaemon: 16. ./build_docker/../src/Server/TCPHandler.cpp:332: DB::TCPHandler::runImpl() @ 0x24d24d98 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.418500 [ 50174 ] {} <Fatal> BaseDaemon: 17. ./build_docker/../src/Server/TCPHandler.cpp:1775: DB::TCPHandler::run() @ 0x24d41cf6 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.434038 [ 50174 ] {} <Fatal> BaseDaemon: 18. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x26be2e0c in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.453419 [ 50174 ] {} <Fatal> BaseDaemon: 19.1. inlined from ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:54: std::__1::default_delete<Poco::Net::TCPServerConnection>::operator()(Poco::Net::TCPServerConnection*) const
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.454478 [ 50174 ] {} <Fatal> BaseDaemon: 19.2. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:315: std::__1::unique_ptr<Poco::Net::TCPServerConnection, std::__1::default_delete<Poco::Net::TCPServerConnection> >::reset(Poco::Net::TCPServerConnection*)
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.454782 [ 50174 ] {} <Fatal> BaseDaemon: 19.3. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:269: ~unique_ptr
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.455114 [ 50174 ] {} <Fatal> BaseDaemon: 19. ../contrib/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x26be32e5 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.473774 [ 50174 ] {} <Fatal> BaseDaemon: 20. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x26d61547 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.485870 [ 50174 ] {} <Fatal> BaseDaemon: 21.1. inlined from ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:156: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable> >::assign(Poco::Runnable*)
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.487932 [ 50174 ] {} <Fatal> BaseDaemon: 21.2. inlined from ../contrib/poco/Foundation/include/Poco/SharedPtr.h:208: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable> >::operator=(Poco::Runnable*)
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.488981 [ 50174 ] {} <Fatal> BaseDaemon: 21. ../contrib/poco/Foundation/src/Thread_POSIX.cpp:360: Poco::ThreadImpl::runnableEntry(void*) @ 0x26d5ee2c in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.489519 [ 50174 ] {} <Fatal> BaseDaemon: 22. ? @ 0x7f41f8f74609 in ?
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:18.489898 [ 50174 ] {} <Fatal> BaseDaemon: 23. __clone @ 0x7f41f8e99163 in ?
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:19.695315 [ 50174 ] {} <Fatal> BaseDaemon: Checksum of the binary: 666A22026EC8DDF6B378E56897950842, integrity check passed.
/var/log/clickhouse-server/clickhouse-server.err.log:2022.06.02 23:30:38.897334 [ 623 ] {} <Fatal> Application: Child process was terminated by signal 6.
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:23:48.748947 [ 13574 ] {e0d43260-253d-4ae4-b286-46b0f883c55e} <Fatal> : Logical error: 'Block structure mismatch in function connect between CopyingDataToViewsTransform and PushingToWindowViewSink stream: different names of columns:
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.154250 [ 50174 ] {} <Fatal> BaseDaemon: ########################################
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.155146 [ 50174 ] {} <Fatal> BaseDaemon: (version 22.6.1.1 (official build), build id: F6EB5249B406F1C9) (from thread 13574) (query_id: e0d43260-253d-4ae4-b286-46b0f883c55e) (query: insert into mt values ) Received signal Aborted (6)
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.170638 [ 50174 ] {} <Fatal> BaseDaemon:
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.170981 [ 50174 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f41f8dbd03b 0x7f41f8d9c859 0xfc35d2b 0xfc35f85 0x227cc964 0x227c5106 0x227ca323 0x24d93826 0x251b43ce 0x238c45d8 0x238c6908 0x23d69e34 0x23d6690a 0x24d24d98 0x24d41cf6 0x26be2e0c 0x26be32e5 0x26d61547 0x26d5ee2c 0x7f41f8f74609 0x7f41f8e99163
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.173551 [ 50174 ] {} <Fatal> BaseDaemon: 3. raise @ 0x7f41f8dbd03b in ?
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.173869 [ 50174 ] {} <Fatal> BaseDaemon: 4. abort @ 0x7f41f8d9c859 in ?
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.200944 [ 50174 ] {} <Fatal> BaseDaemon: 5. ./build_docker/../src/Common/Exception.cpp:47: DB::abortOnFailedAssertion(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xfc35d2b in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.229140 [ 50174 ] {} <Fatal> BaseDaemon: 6. ./build_docker/../src/Common/Exception.cpp:70: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xfc35f85 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.264224 [ 50174 ] {} <Fatal> BaseDaemon: 7. ./build_docker/../src/Core/Block.cpp:35: void DB::onError<void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x227cc964 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.328316 [ 50174 ] {} <Fatal> BaseDaemon: 8. ./build_docker/../src/Core/Block.cpp:0: void DB::checkColumnStructure<void>(DB::ColumnWithTypeAndName const&, DB::ColumnWithTypeAndName const&, std::__1::basic_string_view<char, std::__1::char_traits<char> >, bool, int) @ 0x227c5106 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.386634 [ 50174 ] {} <Fatal> BaseDaemon: 9. ./build_docker/../src/Core/Block.cpp:98: void DB::checkBlockStructure<void>(DB::Block const&, DB::Block const&, std::__1::basic_string_view<char, std::__1::char_traits<char> >, bool) @ 0x227ca323 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.495700 [ 50174 ] {} <Fatal> BaseDaemon: 10. ./build_docker/../src/Processors/Port.cpp:0: DB::connect(DB::OutputPort&, DB::InputPort&) @ 0x24d93826 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.538192 [ 50174 ] {} <Fatal> BaseDaemon: 11. ./build_docker/../src/Processors/Transforms/buildPushingToViewsChain.cpp:375: DB::buildPushingToViewsChain(std::__1::shared_ptr<DB::IStorage> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>, std::__1::shared_ptr<DB::IAST> const&, bool, DB::ThreadStatus*, std::__1::atomic<unsigned long>*, DB::Block const&) @ 0x251b43ce in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.784897 [ 50174 ] {} <Fatal> BaseDaemon: 12. ./build_docker/../src/Interpreters/InterpreterInsertQuery.cpp:253: DB::InterpreterInsertQuery::buildChainImpl(std::__1::shared_ptr<DB::IStorage> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::Block const&, DB::ThreadStatus*, std::__1::atomic<unsigned long>*) @ 0x238c45d8 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.904851 [ 50174 ] {} <Fatal> BaseDaemon: 13. ./build_docker/../src/Interpreters/InterpreterInsertQuery.cpp:431: DB::InterpreterInsertQuery::execute() @ 0x238c6908 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:17.993259 [ 50174 ] {} <Fatal> BaseDaemon: 14. ./build_docker/../src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x23d69e34 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.167544 [ 50174 ] {} <Fatal> BaseDaemon: 15. ./build_docker/../src/Interpreters/executeQuery.cpp:1069: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x23d6690a in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.347373 [ 50174 ] {} <Fatal> BaseDaemon: 16. ./build_docker/../src/Server/TCPHandler.cpp:332: DB::TCPHandler::runImpl() @ 0x24d24d98 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.418500 [ 50174 ] {} <Fatal> BaseDaemon: 17. ./build_docker/../src/Server/TCPHandler.cpp:1775: DB::TCPHandler::run() @ 0x24d41cf6 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.434038 [ 50174 ] {} <Fatal> BaseDaemon: 18. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x26be2e0c in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.453419 [ 50174 ] {} <Fatal> BaseDaemon: 19.1. inlined from ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:54: std::__1::default_delete<Poco::Net::TCPServerConnection>::operator()(Poco::Net::TCPServerConnection*) const
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.454478 [ 50174 ] {} <Fatal> BaseDaemon: 19.2. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:315: std::__1::unique_ptr<Poco::Net::TCPServerConnection, std::__1::default_delete<Poco::Net::TCPServerConnection> >::reset(Poco::Net::TCPServerConnection*)
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.454782 [ 50174 ] {} <Fatal> BaseDaemon: 19.3. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:269: ~unique_ptr
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.455114 [ 50174 ] {} <Fatal> BaseDaemon: 19. ../contrib/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x26be32e5 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.473774 [ 50174 ] {} <Fatal> BaseDaemon: 20. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x26d61547 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.485870 [ 50174 ] {} <Fatal> BaseDaemon: 21.1. inlined from ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:156: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable> >::assign(Poco::Runnable*)
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.487932 [ 50174 ] {} <Fatal> BaseDaemon: 21.2. inlined from ../contrib/poco/Foundation/include/Poco/SharedPtr.h:208: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable> >::operator=(Poco::Runnable*)
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.488981 [ 50174 ] {} <Fatal> BaseDaemon: 21. ../contrib/poco/Foundation/src/Thread_POSIX.cpp:360: Poco::ThreadImpl::runnableEntry(void*) @ 0x26d5ee2c in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.489519 [ 50174 ] {} <Fatal> BaseDaemon: 22. ? @ 0x7f41f8f74609 in ?
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:18.489898 [ 50174 ] {} <Fatal> BaseDaemon: 23. __clone @ 0x7f41f8e99163 in ?
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:19.695315 [ 50174 ] {} <Fatal> BaseDaemon: Checksum of the binary: 666A22026EC8DDF6B378E56897950842, integrity check passed.
/var/log/clickhouse-server/clickhouse-server.stress.log:2022.06.02 23:30:38.897334 [ 623 ] {} <Fatal> Application: Child process was terminated by signal 6.
```
cc: @Vxider, @kssenii

issue: https://github.com/ClickHouse/ClickHouse/issues/37815 | fix: https://github.com/ClickHouse/ClickHouse/pull/37965 | before-fix SHA: 384cef724285493893f837be881e079ee456c720 | after-fix SHA: a4e080d144f866823cbd5010fc99ff53f029d480 | reported: 2022-06-03T08:53:10Z | language: c++ | fixed: 2022-06-11T11:38:08Z
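The failing check in the trace above compares the output header of one processor with the input header of the next, column by column, and throws a logical error on the first difference. An illustrative model of that name check (not the actual `checkBlockStructure` C++):

```python
def connect_headers(lhs: list, rhs: list) -> None:
    """lhs/rhs: column names of the two port headers being connected."""
    if len(lhs) != len(rhs):
        raise ValueError("Block structure mismatch: different number of columns")
    for a, b in zip(lhs, rhs):
        if a != b:
            raise ValueError(
                f"Block structure mismatch: different names of columns: {a} vs {b}"
            )


connect_headers(["ts", "max(a)"], ["ts", "max(a)"])  # compatible headers: no error
# connect_headers(["ts", "max(a)"], ["ts", "b"])     # would raise, as in this crash
```

The crash therefore means the window view sink and the pushing chain were built from diverging headers, e.g. after the view's inner query or source was altered.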
**[closed] ClickHouse/ClickHouse, issue 37799: Functions to estimate Bloom filter size**
Updated files: ["docs/en/engines/table-engines/mergetree-family/mergetree.md"]

For configuring the parameters of `ngrambf_v1` we need to estimate the number of ngrams in the granule and perform some calculations.
It would be good to add some Bloom-filter-related functions to ClickHouse.
These functions are just arithmetic expressions, so it is already possible to compute them by hand, but dedicated aliases would make it easier.
It shouldn't require lots of new code; we could even just add examples to the docs showing how to create them with `CREATE FUNCTION bloomFilterEstimate AS (...) -> expression`.
https://hur.st/bloomfilter/
issue: https://github.com/ClickHouse/ClickHouse/issues/37799 | fix: https://github.com/ClickHouse/ClickHouse/pull/49169 | before-fix SHA: 6388be2062c746008165ef5f9a1cf66ccf63d373 | after-fix SHA: d3c7054bcff284ef5a9a022983d3cdea2bb83fda | reported: 2022-06-02T15:04:22Z | language: c++ | fixed: 2023-05-03T10:43:45Z
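The formulas behind the calculator linked above are the standard ones: optimal filter size m = -n·ln(p)/ln(2)² bits and hash count k = (m/n)·ln(2), for n elements at false-positive rate p. A sketch of the proposed helper (the function name is hypothetical):

```python
from math import ceil, log


def bloom_filter_estimate(n: int, p: float) -> tuple:
    """Optimal Bloom filter size m (bits) and hash count k
    for n elements at false-positive probability p."""
    m = ceil(-n * log(p) / (log(2) ** 2))
    k = round(m / n * log(2))
    return m, k


print(bloom_filter_estimate(1000, 0.01))  # (9586, 7)
```

In ClickHouse SQL the same arithmetic could presumably be wrapped as, e.g., `CREATE FUNCTION bloomFilterEstimateBits AS (n, p) -> ceil(-n * ln(p) / pow(ln(2), 2))` — the name is illustrative, not an existing function.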
**[closed] ClickHouse/ClickHouse, issue 37794: Deadlock in ProcessList**
Updated files: ["src/Common/OvercommitTracker.cpp", "src/Common/OvercommitTracker.h"]

https://s3.amazonaws.com/clickhouse-test-reports/37740/002f95741a1363373ee91e3dca28aff3c83c8794/stress_test__memory__actions_.html
Looks like this (however, I cannot find the root cause in the stacktraces):
```
Thread 1 (Thread 0x7f026f965d40 (LWP 665)):
#0 0x00007f026fcbe110 in __lll_lock_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f026fcb60a3 in pthread_mutex_lock () from /lib/x86_64-linux-gnu/libpthread.so.0
#2 0x000000000b5cd285 in pthread_mutex_lock ()
#3 0x0000000058e40339 in std::__1::__libcpp_mutex_lock (__m=0x723000002380) at ../contrib/libcxx/include/__threading_support:303
#4 std::__1::mutex::lock (this=0x723000002380) at ../contrib/libcxx/src/mutex.cpp:33
#5 0x000000003f028849 in std::__1::lock_guard<std::__1::mutex>::lock_guard (this=0x7ffd03ff7650, __m=...) at ../contrib/libcxx/include/__mutex_base:91
#6 DB::ProcessList::killAllQueries (this=0x723000002380) at ../src/Interpreters/ProcessList.cpp:443
#7 0x000000000b666987 in DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&)::$_10::operator()() const (this=<optimized out>) at ../programs/server/Server.cpp:1728
#8 basic_scope_guard<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&)::$_10>::invoke() (this=<optimized out>) at ../base/base/../base/scope_guard.h:99
#9 basic_scope_guard<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&)::$_10>::~basic_scope_guard() (this=this@entry=0x7ffd03ff8e50) at ../base/base/../base/scope_guard.h:48
#10 0x000000000b652b04 in DB::Server::main (this=<optimized out>) at ../programs/server/Server.cpp:1786
#11 0x00000000531a1eb6 in Poco::Util::Application::run (this=<optimized out>) at ../contrib/poco/Util/src/Application.cpp:334
#12 0x000000000b618622 in DB::Server::run (this=<optimized out>) at ../programs/server/Server.cpp:463
#13 0x00000000531fc2de in Poco::Util::ServerApplication::run (this=0x7ffd03ff9250, argc=<optimized out>, argv=0x703000014d90) at ../contrib/poco/Util/src/ServerApplication.cpp:611
#14 0x000000000b610a8f in mainEntryClickHouseServer (argc=<optimized out>, argv=0x703000014d90) at ../programs/server/Server.cpp:187
#15 0x000000000b6096fe in main (argc_=<optimized out>, argv_=0xfffffffffffffef0) at ../programs/main.cpp:439
Thread 1390 (Thread 0x7efc59bdf700 (LWP 5395)):
#0 0x00007f026fcbe110 in __lll_lock_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f026fcb60a3 in pthread_mutex_lock () from /lib/x86_64-linux-gnu/libpthread.so.0
#2 0x000000000b5cd285 in pthread_mutex_lock ()
#3 0x0000000058e40339 in std::__1::__libcpp_mutex_lock (__m=0x723000002380) at ../contrib/libcxx/include/__threading_support:303
#4 std::__1::mutex::lock (this=0x723000002380) at ../contrib/libcxx/src/mutex.cpp:33
#5 0x000000003f0141c8 in std::__1::unique_lock<std::__1::mutex>::unique_lock (this=0x7efc59bd3358, __m=...) at ../contrib/libcxx/include/__mutex_base:119
#6 DB::ProcessList::insert (this=<optimized out>, query_=..., ast=<optimized out>, query_context=...) at ../src/Interpreters/ProcessList.cpp:84
#7 0x000000003f9cc467 in DB::executeQueryImpl (begin=<optimized out>, end=<optimized out>, context=..., internal=<optimized out>, stage=<optimized out>, istr=<optimized out>) at ../src/Interpreters/executeQuery.cpp:580
#8 0x000000003f9c5ed7 in DB::executeQuery (query=..., context=..., internal=<optimized out>, stage=<optimized out>) at ../src/Interpreters/executeQuery.cpp:1069
#9 0x00000000428ac31c in DB::TCPHandler::runImpl (this=<optimized out>) at ../src/Server/TCPHandler.cpp:332
#10 0x00000000428e7e4c in DB::TCPHandler::run (this=0x71a0031d2e00) at ../src/Server/TCPHandler.cpp:1781
#11 0x000000005315aa1c in Poco::Net::TCPServerConnection::start (this=0x723000002380) at ../contrib/poco/Net/src/TCPServerConnection.cpp:43
#12 0x000000005315be66 in Poco::Net::TCPServerDispatcher::run (this=<optimized out>) at ../contrib/poco/Net/src/TCPServerDispatcher.cpp:115
#13 0x0000000053878492 in Poco::PooledThread::run (this=0x7150003d65a0) at ../contrib/poco/Foundation/src/ThreadPool.cpp:199
#14 0x00000000538738c0 in Poco::(anonymous namespace)::RunnableHolder::run (this=<optimized out>) at ../contrib/poco/Foundation/src/Thread.cpp:55
#15 0x000000005386f756 in Poco::ThreadImpl::runnableEntry (pThread=<optimized out>) at ../contrib/poco/Foundation/src/Thread_POSIX.cpp:345
#16 0x00007f026fcb3609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#17 0x00007f026fbd8163 in clone () from /lib/x86_64-linux-gnu/libc.so.6
Thread 863 (Thread 0x7efe6a0b2700 (LWP 3014)):
#0 0x00007f026fcbe110 in __lll_lock_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f026fcb60a3 in pthread_mutex_lock () from /lib/x86_64-linux-gnu/libpthread.so.0
#2 0x000000000b5cd285 in pthread_mutex_lock ()
#3 0x0000000058e40339 in std::__1::__libcpp_mutex_lock (__m=0x723000002380) at ../contrib/libcxx/include/__threading_support:303
#4 std::__1::mutex::lock (this=0x723000002380) at ../contrib/libcxx/src/mutex.cpp:33
#5 0x000000003f022ca5 in std::__1::lock_guard<std::__1::mutex>::lock_guard (this=0x7efe6a0a83b0, __m=...) at ../contrib/libcxx/include/__mutex_base:91
#6 DB::ProcessListEntry::~ProcessListEntry (this=0x703001816428) at ../src/Interpreters/ProcessList.cpp:269
#7 0x000000003f03592d in std::__1::__destroy_at<DB::ProcessListEntry, 0> (__loc=0x723000002380) at ../contrib/libcxx/include/__memory/construct_at.h:56
#8 std::__1::destroy_at<DB::ProcessListEntry, 0> (__loc=0x723000002380) at ../contrib/libcxx/include/__memory/construct_at.h:81
#9 std::__1::allocator_traits<std::__1::allocator<DB::ProcessListEntry> >::destroy<DB::ProcessListEntry, void, void> (__p=0x723000002380) at ../contrib/libcxx/include/__memory/allocator_traits.h:317
#10 std::__1::__shared_ptr_emplace<DB::ProcessListEntry, std::__1::allocator<DB::ProcessListEntry> >::__on_zero_shared (this=<optimized out>) at ../contrib/libcxx/include/__memory/shared_ptr.h:310
#11 0x000000003babde63 in std::__1::__shared_count::__release_shared (this=0x703001816410) at ../contrib/libcxx/include/__memory/shared_ptr.h:174
#12 std::__1::__shared_weak_count::__release_shared (this=0x703001816410) at ../contrib/libcxx/include/__memory/shared_ptr.h:216
#13 std::__1::shared_ptr<DB::ProcessListEntry>::~shared_ptr (this=0x7efe6a0a84b0) at ../contrib/libcxx/include/__memory/shared_ptr.h:703
#14 std::__1::shared_ptr<DB::ProcessListEntry>::reset (this=0x71a0026402c0) at ../contrib/libcxx/include/__memory/shared_ptr.h:769
#15 DB::BlockIO::reset (this=0x71a0026402c0) at ../src/QueryPipeline/BlockIO.cpp:21
#16 DB::BlockIO::operator= (this=0x71a0026402c0, rhs=...) at ../src/QueryPipeline/BlockIO.cpp:32
#17 0x00000000428f0fa1 in DB::QueryState::operator= (this=<optimized out>, this@entry=0x71a0026401e0) at ../src/Server/TCPHandler.h:46
#18 0x00000000428acac0 in DB::QueryState::reset (this=0x71a0026401e0) at ../src/Server/TCPHandler.h:110
#19 DB::TCPHandler::runImpl (this=<optimized out>) at ../src/Server/TCPHandler.cpp:393
#20 0x00000000428e7e4c in DB::TCPHandler::run (this=0x71a002640000) at ../src/Server/TCPHandler.cpp:1781
#21 0x000000005315aa1c in Poco::Net::TCPServerConnection::start (this=0x723000002380) at ../contrib/poco/Net/src/TCPServerConnection.cpp:43
#22 0x000000005315be66 in Poco::Net::TCPServerDispatcher::run (this=<optimized out>) at ../contrib/poco/Net/src/TCPServerDispatcher.cpp:115
#23 0x0000000053878492 in Poco::PooledThread::run (this=0x7150002b4d20) at ../contrib/poco/Foundation/src/ThreadPool.cpp:199
#24 0x00000000538738c0 in Poco::(anonymous namespace)::RunnableHolder::run (this=<optimized out>) at ../contrib/poco/Foundation/src/Thread.cpp:55
#25 0x000000005386f756 in Poco::ThreadImpl::runnableEntry (pThread=<optimized out>) at ../contrib/poco/Foundation/src/Thread_POSIX.cpp:345
#26 0x00007f026fcb3609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#27 0x00007f026fbd8163 in clone () from /lib/x86_64-linux-gnu/libc.so.6
``` | https://github.com/ClickHouse/ClickHouse/issues/37794 | https://github.com/ClickHouse/ClickHouse/pull/39030 | f00069fdb69df8162302a36dffc50f1cbd2e7927 | d222f238f02c79863cce7cc95ec19c68905e946a | "2022-06-02T13:12:10Z" | c++ | "2022-07-09T11:47:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,739 | ["src/Storages/MergeTree/ReplicatedMergeTreeQueue.cpp", "src/Storages/MergeTree/ReplicatedMergeTreeQueue.h", "src/Storages/StorageReplicatedMergeTree.cpp"] | Possible flaky 01154_move_partition_long | https://s3.amazonaws.com/clickhouse-test-reports/37650/217f492264f5526a2372f95c9144be162934a34c/stateless_tests__thread__actions__[2/3].html
<details>
<summary>Error message</summary>
```
2022-05-30 10:25:56 [e89d17b24d77] 2022.05.30 10:25:52.466531 [ 40175 ] {316371e3-cd4c-4c1b-9755-06affc23d6bb} InterpreterSystemQuery: SYNC REPLICA test_8r9vin.src_10 (9e000aca-17ed-4ac9-ba47-28468bcf5a58): Timed out!
2022-05-30 10:25:56 [e89d17b24d77] 2022.05.30 10:25:52.467761 [ 40175 ] {316371e3-cd4c-4c1b-9755-06affc23d6bb} executeQuery: Code: 159. DB::Exception: SYNC REPLICA test_8r9vin.src_10 (9e000aca-17ed-4ac9-ba47-28468bcf5a58): command timed out. See the 'receive_timeout' setting. (TIMEOUT_EXCEEDED) (version 22.6.1.1) (from [::1]:36136) (comment: 01154_move_partition_long.sh) (in query: SYSTEM SYNC REPLICA src_10), Stack trace (when copying this message, always include the lines below):
2022-05-30 10:25:56
2022-05-30 10:25:56 0. ./build_docker/../contrib/libcxx/include/exception:0: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x20897afb in /usr/bin/clickhouse
2022-05-30 10:25:56 1. ./build_docker/../src/Common/Exception.cpp:69: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb23001c in /usr/bin/clickhouse
2022-05-30 10:25:56 2. ./build_docker/../contrib/libcxx/include/string:1445: DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(int, fmt::v8::basic_format_string<char, std::__1::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::type>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) @ 0xb331e88 in /usr/bin/clickhouse
2022-05-30 10:25:56 3. ./build_docker/../src/Interpreters/InterpreterSystemQuery.cpp:0: DB::InterpreterSystemQuery::syncReplica(DB::ASTSystemQuery&) @ 0x1a87873f in /usr/bin/clickhouse
2022-05-30 10:25:56 4. ./build_docker/../src/Interpreters/InterpreterSystemQuery.cpp:441: DB::InterpreterSystemQuery::execute() @ 0x1a8726ae in /usr/bin/clickhouse
2022-05-30 10:25:56 5. ./build_docker/../src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x1abc6122 in /usr/bin/clickhouse
2022-05-30 10:25:56 6. ./build_docker/../src/Interpreters/executeQuery.cpp:1069: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x1abc37c2 in /usr/bin/clickhouse
2022-05-30 10:25:56 7. ./build_docker/../src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x1ba013e7 in /usr/bin/clickhouse
2022-05-30 10:25:56 8. ./build_docker/../src/Server/TCPHandler.cpp:1783: DB::TCPHandler::run() @ 0x1ba12088 in /usr/bin/clickhouse
2022-05-30 10:25:56 9. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x206d40a3 in /usr/bin/clickhouse
2022-05-30 10:25:56 10. ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:54: Poco::Net::TCPServerDispatcher::run() @ 0x206d4913 in /usr/bin/clickhouse
2022-05-30 10:25:56 11. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x209399f6 in /usr/bin/clickhouse
2022-05-30 10:25:56 12. ./build_docker/../contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x20937b90 in /usr/bin/clickhouse
2022-05-30 10:25:56 13. ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:277: Poco::ThreadImpl::runnableEntry(void*) @ 0x20936208 in /usr/bin/clickhouse
2022-05-30 10:25:56 14. __tsan_thread_start_func @ 0xb17503d in /usr/bin/clickhouse
2022-05-30 10:25:56 15. ? @ 0x7f398fdd0609 in ?
2022-05-30 10:25:56 16. __clone @ 0x7f398fcf5163 in ?
2022-05-30 10:25:56
2022-05-30 10:25:56 Received exception from server (version 22.6.1):
2022-05-30 10:25:56 Code: 159. DB::Exception: Received from localhost:9000. DB::Exception: SYNC REPLICA test_8r9vin.src_10 (9e000aca-17ed-4ac9-ba47-28468bcf5a58): command timed out. See the 'receive_timeout' setting. (TIMEOUT_EXCEEDED)
2022-05-30 10:25:56 (query: SYSTEM SYNC REPLICA src_10)
2022-05-30 10:25:56 [e89d17b24d77] 2022.05.30 10:25:52.467488 [ 40168 ] {c2df5ebd-2ad5-48c6-bdae-b39e741bd64c} InterpreterSystemQuery: SYNC REPLICA test_8r9vin.src_13 (434057b5-5e94-4aef-97b9-439c39674883): Timed out!
2022-05-30 10:25:56 [e89d17b24d77] 2022.05.30 10:25:52.468703 [ 40168 ] {c2df5ebd-2ad5-48c6-bdae-b39e741bd64c} executeQuery: Code: 159. DB::Exception: SYNC REPLICA test_8r9vin.src_13 (434057b5-5e94-4aef-97b9-439c39674883): command timed out. See the 'receive_timeout' setting. (TIMEOUT_EXCEEDED) (version 22.6.1.1) (from [::1]:36138) (comment: 01154_move_partition_long.sh) (in query: SYSTEM SYNC REPLICA src_13), Stack trace (when copying this message, always include the lines below):
2022-05-30 10:25:56
2022-05-30 10:25:56 0. ./build_docker/../contrib/libcxx/include/exception:0: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x20897afb in /usr/bin/clickhouse
2022-05-30 10:25:56 1. ./build_docker/../src/Common/Exception.cpp:69: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb23001c in /usr/bin/clickhouse
2022-05-30 10:25:56 2. ./build_docker/../contrib/libcxx/include/string:1445: DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(int, fmt::v8::basic_format_string<char, std::__1::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::type>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) @ 0xb331e88 in /usr/bin/clickhouse
2022-05-30 10:25:56 3. ./build_docker/../src/Interpreters/InterpreterSystemQuery.cpp:0: DB::InterpreterSystemQuery::syncReplica(DB::ASTSystemQuery&) @ 0x1a87873f in /usr/bin/clickhouse
2022-05-30 10:25:56 4. ./build_docker/../src/Interpreters/InterpreterSystemQuery.cpp:441: DB::InterpreterSystemQuery::execute() @ 0x1a8726ae in /usr/bin/clickhouse
2022-05-30 10:25:56 5. ./build_docker/../src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x1abc6122 in /usr/bin/clickhouse
2022-05-30 10:25:56 6. ./build_docker/../src/Interpreters/executeQuery.cpp:1069: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x1abc37c2 in /usr/bin/clickhouse
2022-05-30 10:25:56 7. ./build_docker/../src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x1ba013e7 in /usr/bin/clickhouse
2022-05-30 10:25:56 8. ./build_docker/../src/Server/TCPHandler.cpp:1783: DB::TCPHandler::run() @ 0x1ba12088 in /usr/bin/clickhouse
2022-05-30 10:25:56 9. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x206d40a3 in /usr/bin/clickhouse
2022-05-30 10:25:56 10. ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:54: Poco::Net::TCPServerDispatcher::run() @ 0x206d4913 in /usr/bin/clickhouse
2022-05-30 10:25:56 11. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x209399f6 in /usr/bin/clickhouse
2022-05-30 10:25:56 12. ./build_docker/../contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x20937b90 in /usr/bin/clickhouse
2022-05-30 10:25:56 13. ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:277: Poco::ThreadImpl::runnableEntry(void*) @ 0x20936208 in /usr/bin/clickhouse
2022-05-30 10:25:56 14. __tsan_thread_start_func @ 0xb17503d in /usr/bin/clickhouse
2022-05-30 10:25:56 15. ? @ 0x7f398fdd0609 in ?
2022-05-30 10:25:56 16. __clone @ 0x7f398fcf5163 in ?
2022-05-30 10:25:56
2022-05-30 10:25:56 Received exception from server (version 22.6.1):
2022-05-30 10:25:56 Code: 159. DB::Exception: Received from localhost:9000. DB::Exception: SYNC REPLICA test_8r9vin.src_13 (434057b5-5e94-4aef-97b9-439c39674883): command timed out. See the 'receive_timeout' setting. (TIMEOUT_EXCEEDED)
2022-05-30 10:25:56 (query: SYSTEM SYNC REPLICA src_13)
2022-05-30 10:25:56
2022-05-30 10:25:56 stdout:
2022-05-30 10:25:56 Replication did not hang: synced all replicas of dst_
2022-05-30 10:25:56 Consistency: 1
2022-05-30 10:25:56 sync failed, queue: default src_10 r1_1340819367 1 queue-0000000354 REPLACE_RANGE 2022-05-30 10:18:37 0 r1_3691680433 [] 0 0 698 Code: 234. DB::Exception: Not found part 0_36_36_1 (or part covering it) neither source table neither remote replicas. (NO_REPLICA_HAS_PART) (version 22.6.1.1) 2022-05-30 10:25:52 22 Not executing log entry queue-0000000354 for part 0_37_37_1 because it is covered by part 0_37_42_2 that is currently executing. 2022-05-30 10:18:48
2022-05-30 10:25:56 sync failed, queue: default src_10 r1_1340819367 0 queue-0000000355 DROP_RANGE 2022-05-30 10:18:37 0 r1_3691680433 0_36_36_1 [] 0 0 0 1969-12-31 18:00:00 1910 Not executing log entry queue-0000000355 of type DROP_RANGE for part 0_36_36_1 because another DROP_RANGE or REPLACE_RANGE entry are currently executing. 2022-05-30 10:25:52
2022-05-30 10:25:56 sync failed, queue: default src_13 r1_75488309 1 queue-0000000354 REPLACE_RANGE 2022-05-30 10:18:37 0 r1_3691680433 [] 0 0 710 Code: 234. DB::Exception: Not found part 0_36_36_1 (or part covering it) neither source table neither remote replicas. (NO_REPLICA_HAS_PART) (version 22.6.1.1) 2022-05-30 10:25:52 13 Not executing log entry queue-0000000354 for part 0_37_37_1 because it is covered by part 0_37_42_2 that is currently executing. 2022-05-30 10:18:48
2022-05-30 10:25:56 sync failed, queue: default src_13 r1_75488309 0 queue-0000000355 DROP_RANGE 2022-05-30 10:18:37 0 r1_3691680433 0_36_36_1 [] 0 0 0 1969-12-31 18:00:00 1991 Not executing log entry queue-0000000355 of type DROP_RANGE for part 0_36_36_1 because another DROP_RANGE or REPLACE_RANGE entry are currently executing. 2022-05-30 10:25:52
2022-05-30 10:25:56 Replication did not hang: synced all replicas of src_
2022-05-30 10:25:56
2022-05-30 10:25:56
2022-05-30 10:25:56 Settings used in the test: --max_insert_threads=0 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=1 --fsync_metadata=1 --output_format_parallel_formatting=0 --input_format_parallel_parsing=1 --min_chunk_bytes_for_parallel_parsing=8615318 --max_read_buffer_size=796245 --prefer_localhost_replica=1 --max_block_size=21183 --max_threads=42 --optimize_or_like_chain=0
2022-05-30 10:25:56
2022-05-30 10:25:56 Database: test_8r9vin
```
</details> | https://github.com/ClickHouse/ClickHouse/issues/37739 | https://github.com/ClickHouse/ClickHouse/pull/37763 | 585930f06a96da5f94afa1205960f96cf7d5f037 | 45657d203a434ae0818558c4f836d64e7a67280a | "2022-06-01T11:42:38Z" | c++ | "2022-06-02T11:15:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,735 | ["src/Functions/FunctionHashID.h", "tests/queries/0_stateless/02293_hashid.reference", "tests/queries/0_stateless/02293_hashid.sql"] | FunctionHashID: reference binding to null pointer of type 'const DB::IColumn' | https://s3.amazonaws.com/clickhouse-test-reports/0/154cae43562f96a12936fd26e04e5fcf689ed9ba/fuzzer_astfuzzerubsan,actions//report.html
```
2022.06.01 13:01:01.170312 [ 437 ] {} <Fatal> BaseDaemon: ########################################
2022.06.01 13:01:01.170414 [ 437 ] {} <Fatal> BaseDaemon: (version 22.6.1.1 (official build), build id: C85AC48A36A9E725) (from thread 175) (query_id: a50b5ac8-3141-459b-90dc-90d90df5c4ad) (query: SELECT hashid('1', '0.0000001023', NULL), NULL, hashid(256, hashid(9223372036854775806, '0.02'), '2.56'), 65535, hashid('-21474836.48', NULL)) Received signal Unknown signal (-3)
2022.06.01 13:01:01.170450 [ 437 ] {} <Fatal> BaseDaemon: Sanitizer trap.
2022.06.01 13:01:01.170506 [ 437 ] {} <Fatal> BaseDaemon: Stack trace: 0xfc5b7aa 0x223ee551 0xfc11456 0xfc1de1f 0x16829610 0x2256c225 0x2256b97e 0x2256c38f 0x22d34457 0x23199f88 0x231af4f2 0x231a1d50 0x231a6e8c 0x2317a8bc 0x23164899 0x2316eae2 0x23173ef9 0x238f017b 0x238e3655 0x238dca60 0x238d915c 0x23955e1c 0x23952ee7 0x23950c81 0x2388ee28 0x2388d87e 0x23d62232 0x23d5f4ca 0x24d19bf8 0x24d36c36 0x26bd790c 0x26bd7de5 0x26d56047 0x26d5392c 0x7f89a31ff609 0x7f89a3124163
2022.06.01 13:01:01.178861 [ 437 ] {} <Fatal> BaseDaemon: 0.1. inlined from ./build_docker/../src/Common/StackTrace.cpp:305: StackTrace::tryCapture()
2022.06.01 13:01:01.178885 [ 437 ] {} <Fatal> BaseDaemon: 0. ../src/Common/StackTrace.cpp:266: StackTrace::StackTrace() @ 0xfc5b7aa in /workspace/clickhouse
2022.06.01 13:01:01.197471 [ 437 ] {} <Fatal> BaseDaemon: 1. ./build_docker/../src/Daemon/BaseDaemon.cpp:0: sanitizerDeathCallback() @ 0x223ee551 in /workspace/clickhouse
2022.06.01 13:01:02.179782 [ 437 ] {} <Fatal> BaseDaemon: 2. __sanitizer::Die() @ 0xfc11456 in /workspace/clickhouse
2022.06.01 13:01:03.146077 [ 437 ] {} <Fatal> BaseDaemon: 3. ? @ 0xfc1de1f in /workspace/clickhouse
2022.06.01 13:01:04.114631 [ 437 ] {} <Fatal> BaseDaemon: 4. DB::FunctionHashID::getReturnTypeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x16829610 in /workspace/clickhouse
2022.06.01 13:01:04.128425 [ 437 ] {} <Fatal> BaseDaemon: 5. ./build_docker/../src/Functions/IFunction.cpp:478: DB::IFunctionOverloadResolver::getReturnTypeWithoutLowCardinality(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x2256c225 in /workspace/clickhouse
2022.06.01 13:01:04.141670 [ 437 ] {} <Fatal> BaseDaemon: 6. ./build_docker/../src/Functions/IFunction.cpp:424: DB::IFunctionOverloadResolver::getReturnType(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x2256b97e in /workspace/clickhouse
2022.06.01 13:01:04.155275 [ 437 ] {} <Fatal> BaseDaemon: 7. ./build_docker/../src/Functions/IFunction.cpp:439: DB::IFunctionOverloadResolver::build(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x2256c38f in /workspace/clickhouse
2022.06.01 13:01:04.329290 [ 437 ] {} <Fatal> BaseDaemon: 8.1. inlined from ./build_docker/../contrib/libcxx/include/__memory/shared_ptr.h:617: shared_ptr
2022.06.01 13:01:04.329340 [ 437 ] {} <Fatal> BaseDaemon: 8.2. inlined from ../contrib/libcxx/include/__memory/shared_ptr.h:724: std::__1::shared_ptr<DB::IFunctionBase>::operator=(std::__1::shared_ptr<DB::IFunctionBase>&&)
2022.06.01 13:01:04.329387 [ 437 ] {} <Fatal> BaseDaemon: 8. ../src/Interpreters/ActionsDAG.cpp:186: DB::ActionsDAG::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x22d34457 in /workspace/clickhouse
2022.06.01 13:01:04.377968 [ 437 ] {} <Fatal> BaseDaemon: 9. ./build_docker/../src/Interpreters/ActionsVisitor.cpp:0: DB::ScopeStack::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x23199f88 in /workspace/clickhouse
2022.06.01 13:01:04.431397 [ 437 ] {} <Fatal> BaseDaemon: 10.1. inlined from ./build_docker/../contrib/libcxx/include/string:1445: std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__is_long() const
2022.06.01 13:01:04.431433 [ 437 ] {} <Fatal> BaseDaemon: 10.2. inlined from ../contrib/libcxx/include/string:2231: ~basic_string
2022.06.01 13:01:04.431471 [ 437 ] {} <Fatal> BaseDaemon: 10. ../src/Interpreters/ActionsVisitor.h:180: DB::ActionsMatcher::Data::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x231af4f2 in /workspace/clickhouse
2022.06.01 13:01:04.481055 [ 437 ] {} <Fatal> BaseDaemon: 11. ./build_docker/../src/Interpreters/ActionsVisitor.cpp:0: DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x231a1d50 in /workspace/clickhouse
2022.06.01 13:01:04.532756 [ 437 ] {} <Fatal> BaseDaemon: 12. ./build_docker/../contrib/libcxx/include/vector:0: DB::ActionsMatcher::visit(DB::ASTExpressionList&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x231a6e8c in /workspace/clickhouse
2022.06.01 13:01:04.587892 [ 437 ] {} <Fatal> BaseDaemon: 13. ./build_docker/../src/Interpreters/InDepthNodeVisitor.h:43: DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) @ 0x2317a8bc in /workspace/clickhouse
2022.06.01 13:01:04.643294 [ 437 ] {} <Fatal> BaseDaemon: 14.1. inlined from ./build_docker/../src/Interpreters/ActionsVisitor.h:185: DB::ActionsMatcher::Data::getActions()
2022.06.01 13:01:04.643340 [ 437 ] {} <Fatal> BaseDaemon: 14. ../src/Interpreters/ExpressionAnalyzer.cpp:615: DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ActionsDAG>&, bool) @ 0x23164899 in /workspace/clickhouse
2022.06.01 13:01:04.704922 [ 437 ] {} <Fatal> BaseDaemon: 15. ./build_docker/../src/Interpreters/ExpressionAnalyzer.cpp:0: DB::SelectQueryExpressionAnalyzer::appendSelect(DB::ExpressionActionsChain&, bool) @ 0x2316eae2 in /workspace/clickhouse
2022.06.01 13:01:04.768851 [ 437 ] {} <Fatal> BaseDaemon: 16. ./build_docker/../src/Interpreters/ExpressionAnalyzer.cpp:0: DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::__1::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) @ 0x23173ef9 in /workspace/clickhouse
2022.06.01 13:01:04.857395 [ 437 ] {} <Fatal> BaseDaemon: 17. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:692: DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0x238f017b in /workspace/clickhouse
2022.06.01 13:01:04.942067 [ 437 ] {} <Fatal> BaseDaemon: 18. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:552: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, DB::SubqueryForSet, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, DB::SubqueryForSet> > >, std::__1::unordered_map<DB::PreparedSetKey, std::__1::shared_ptr<DB::Set>, DB::PreparedSetKey::Hash, std::__1::equal_to<DB::PreparedSetKey>, std::__1::allocator<std::__1::pair<DB::PreparedSetKey const, std::__1::shared_ptr<DB::Set> > > >)::$_1::operator()(bool) const @ 0x238e3655 in /workspace/clickhouse
2022.06.01 13:01:05.024287 [ 437 ] {} <Fatal> BaseDaemon: 19. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:0: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, DB::SubqueryForSet, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, DB::SubqueryForSet> > >, std::__1::unordered_map<DB::PreparedSetKey, std::__1::shared_ptr<DB::Set>, DB::PreparedSetKey::Hash, std::__1::equal_to<DB::PreparedSetKey>, std::__1::allocator<std::__1::pair<DB::PreparedSetKey const, std::__1::shared_ptr<DB::Set> > > >) @ 0x238dca60 in /workspace/clickhouse
2022.06.01 13:01:05.106188 [ 437 ] {} <Fatal> BaseDaemon: 20. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:165: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x238d915c in /workspace/clickhouse
2022.06.01 13:01:05.139970 [ 437 ] {} <Fatal> BaseDaemon: 21. ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:725: std::__1::__unique_if<DB::InterpreterSelectQuery>::__unique_single std::__1::make_unique<DB::InterpreterSelectQuery, std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&>(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x23955e1c in /workspace/clickhouse
2022.06.01 13:01:05.167258 [ 437 ] {} <Fatal> BaseDaemon: 22.1. inlined from ./build_docker/../contrib/libcxx/include/__memory/compressed_pair.h:48: __compressed_pair_elem<DB::InterpreterSelectQuery *, void>
2022.06.01 13:01:05.167289 [ 437 ] {} <Fatal> BaseDaemon: 22.2. inlined from ../contrib/libcxx/include/__memory/compressed_pair.h:130: __compressed_pair<DB::InterpreterSelectQuery *, std::__1::default_delete<DB::InterpreterSelectQuery> >
2022.06.01 13:01:05.167328 [ 437 ] {} <Fatal> BaseDaemon: 22.3. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:220: unique_ptr<DB::InterpreterSelectQuery, std::__1::default_delete<DB::InterpreterSelectQuery>, void, void>
2022.06.01 13:01:05.167368 [ 437 ] {} <Fatal> BaseDaemon: 22. ../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:227: DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x23952ee7 in /workspace/clickhouse
2022.06.01 13:01:05.193772 [ 437 ] {} <Fatal> BaseDaemon: 23. ./build_docker/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:0: DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x23950c81 in /workspace/clickhouse
2022.06.01 13:01:05.205894 [ 437 ] {} <Fatal> BaseDaemon: 24. ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:0: std::__1::__unique_if<DB::InterpreterSelectWithUnionQuery>::__unique_single std::__1::make_unique<DB::InterpreterSelectWithUnionQuery, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions const&>(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions const&) @ 0x2388ee28 in /workspace/clickhouse
2022.06.01 13:01:05.216542 [ 437 ] {} <Fatal> BaseDaemon: 25.1. inlined from ./build_docker/../contrib/libcxx/include/__memory/compressed_pair.h:48: __compressed_pair_elem<DB::InterpreterSelectWithUnionQuery *, void>
2022.06.01 13:01:05.216568 [ 437 ] {} <Fatal> BaseDaemon: 25.2. inlined from ../contrib/libcxx/include/__memory/compressed_pair.h:130: __compressed_pair<DB::InterpreterSelectWithUnionQuery *, std::__1::default_delete<DB::InterpreterSelectWithUnionQuery> >
2022.06.01 13:01:05.216607 [ 437 ] {} <Fatal> BaseDaemon: 25.3. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:220: unique_ptr<DB::InterpreterSelectWithUnionQuery, std::__1::default_delete<DB::InterpreterSelectWithUnionQuery>, void, void>
2022.06.01 13:01:05.216643 [ 437 ] {} <Fatal> BaseDaemon: 25. ../src/Interpreters/InterpreterFactory.cpp:122: DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x2388d87e in /workspace/clickhouse
2022.06.01 13:01:05.259759 [ 437 ] {} <Fatal> BaseDaemon: 26. ./build_docker/../src/Interpreters/executeQuery.cpp:660: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x23d62232 in /workspace/clickhouse
2022.06.01 13:01:05.306166 [ 437 ] {} <Fatal> BaseDaemon: 27. ./build_docker/../src/Interpreters/executeQuery.cpp:1069: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x23d5f4ca in /workspace/clickhouse
2022.06.01 13:01:05.344488 [ 437 ] {} <Fatal> BaseDaemon: 28. ./build_docker/../src/Server/TCPHandler.cpp:332: DB::TCPHandler::runImpl() @ 0x24d19bf8 in /workspace/clickhouse
2022.06.01 13:01:05.391874 [ 437 ] {} <Fatal> BaseDaemon: 29. ./build_docker/../src/Server/TCPHandler.cpp:1783: DB::TCPHandler::run() @ 0x24d36c36 in /workspace/clickhouse
2022.06.01 13:01:05.396269 [ 437 ] {} <Fatal> BaseDaemon: 30. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x26bd790c in /workspace/clickhouse
2022.06.01 13:01:05.403249 [ 437 ] {} <Fatal> BaseDaemon: 31.1. inlined from ./build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:54: std::__1::default_delete<Poco::Net::TCPServerConnection>::operator()(Poco::Net::TCPServerConnection*) const
2022.06.01 13:01:05.403277 [ 437 ] {} <Fatal> BaseDaemon: 31.2. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:315: std::__1::unique_ptr<Poco::Net::TCPServerConnection, std::__1::default_delete<Poco::Net::TCPServerConnection> >::reset(Poco::Net::TCPServerConnection*)
2022.06.01 13:01:05.403313 [ 437 ] {} <Fatal> BaseDaemon: 31.3. inlined from ../contrib/libcxx/include/__memory/unique_ptr.h:269: ~unique_ptr
2022.06.01 13:01:05.403349 [ 437 ] {} <Fatal> BaseDaemon: 31. ../contrib/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x26bd7de5 in /workspace/clickhouse
2022.06.01 13:01:05.410419 [ 437 ] {} <Fatal> BaseDaemon: 32. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x26d56047 in /workspace/clickhouse
2022.06.01 13:01:05.417331 [ 437 ] {} <Fatal> BaseDaemon: 33.1. inlined from ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:156: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable> >::assign(Poco::Runnable*)
2022.06.01 13:01:05.417358 [ 437 ] {} <Fatal> BaseDaemon: 33.2. inlined from ../contrib/poco/Foundation/include/Poco/SharedPtr.h:208: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable> >::operator=(Poco::Runnable*)
2022.06.01 13:01:05.417391 [ 437 ] {} <Fatal> BaseDaemon: 33. ../contrib/poco/Foundation/src/Thread_POSIX.cpp:360: Poco::ThreadImpl::runnableEntry(void*) @ 0x26d5392c in /workspace/clickhouse
2022.06.01 13:01:05.417432 [ 437 ] {} <Fatal> BaseDaemon: 34. ? @ 0x7f89a31ff609 in ?
2022.06.01 13:01:05.417471 [ 437 ] {} <Fatal> BaseDaemon: 35. __clone @ 0x7f89a3124163 in ?
2022.06.01 13:01:05.704804 [ 437 ] {} <Fatal> BaseDaemon: Checksum of the binary: 79FFE3C7CA8806ECAB1A5C2D9D4F5C3C, integrity check passed.
```
#37013
cc: @mnutt, @yakov-olkhovskiy
| https://github.com/ClickHouse/ClickHouse/issues/37735 | https://github.com/ClickHouse/ClickHouse/pull/37742 | 6a516099152c8c14ffbcc6ffdc79c46077918889 | 670c721ded3ea4f3a1abaa39908f637242694d03 | "2022-06-01T10:41:21Z" | c++ | "2022-06-02T10:47:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,728 | ["docs/en/sql-reference/functions/files.md", "docs/ru/sql-reference/functions/files.md", "src/Functions/FunctionFile.cpp", "tests/queries/0_stateless/02357_file_default_value.reference", "tests/queries/0_stateless/02357_file_default_value.sql", "tests/queries/0_stateless/02358_file_default_value.reference", "tests/queries/0_stateless/02358_file_default_value.sh"] | Ordinary function `file`: it should take optional second argument "default" to return when file does not exist. | **Use case**
#18842
**Describe the solution you'd like**
`SELECT file('nonexisting')` - throws exception;
`SELECT file('nonexisting', '')` - returns empty string;
`SELECT file('nonexisting', NULL)` - returns NULL.
| https://github.com/ClickHouse/ClickHouse/issues/37728 | https://github.com/ClickHouse/ClickHouse/pull/39218 | 91c0b9476889c074ca1388de867febde5ce51dd5 | dfdfabec947065dd4939d8ca92e984212eba0f31 | "2022-06-01T09:54:33Z" | c++ | "2022-08-01T11:04:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,727 | ["src/Common/ColumnsHashing.h", "tests/queries/0_stateless/02316_const_string_intersact.reference", "tests/queries/0_stateless/02316_const_string_intersact.sql"] | `SELECT 'Play ClickHouse' InterSect SELECT 'Play ClickHouse'` makes null pointer dereference. | null | https://github.com/ClickHouse/ClickHouse/issues/37727 | https://github.com/ClickHouse/ClickHouse/pull/37738 | e23cec01d525dc7423d16144319fa432274ba5b9 | 89638de521ae1b561dbc74936d5b9402ec5dbd37 | "2022-06-01T09:49:17Z" | c++ | "2022-06-01T16:31:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,718 | [".github/workflows/backport_branches.yml", ".github/workflows/release.yml", ".github/workflows/release_branches.yml", "tests/ci/ast_fuzzer_check.py", "tests/ci/build_download_helper.py", "tests/ci/download_binary.py", "tests/ci/env_helper.py", "tests/ci/push_to_artifactory.py", "tests/ci/version_helper.py"] | Package listing for macOS | **Use case**
Working with a specific version in a macOS environment (especially for devs)
**Describe the solution you'd like**
Currently we publish package listings for many Linux distributions; it would be nice to do the same for macOS. Something like https://packages.clickhouse.com/tgz/stable/, but for macOS.
**Describe alternatives you've considered**
If I want to find a specific macOS build I need to go to the specific release, find the last commit, search for the build history, find the S3 bucket and then play around with the URL. It's not the best experience.
| https://github.com/ClickHouse/ClickHouse/issues/37718 | https://github.com/ClickHouse/ClickHouse/pull/41088 | 12e7cd6b19a1f0a446e67968a1aecbb10f1f1f87 | b815fdbc8c9716c4e3724ef02bcc98e4792407fb | "2022-05-31T19:36:57Z" | c++ | "2022-09-09T15:17:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,673 | ["src/Interpreters/InterpreterSelectQuery.cpp", "tests/queries/0_stateless/02302_projections_GROUP_BY_ORDERY_BY_optimize_aggregation_in_order.reference", "tests/queries/0_stateless/02302_projections_GROUP_BY_ORDERY_BY_optimize_aggregation_in_order.sql"] | ORDER BY gives unsorted results when there are duplicate rows in table and projection is used | **Describe what's wrong**
**Does it reproduce on recent release?**
Yes, it happens in the latest release (19 May).
**How to reproduce**
* Which ClickHouse server version to use: v22.5.1.2079-stable
* Non-default settings, if any: `SET allow_experimental_projection_optimization=1;`
* `CREATE TABLE` statements for all tables involved
```
CREATE TABLE IF NOT EXISTS signoz_traces.signoz_index_v2 (
timestamp DateTime64(9) CODEC(DoubleDelta, LZ4),
traceID FixedString(32) CODEC(ZSTD(1)),
spanID String CODEC(ZSTD(1)),
parentSpanID String CODEC(ZSTD(1)),
serviceName LowCardinality(String) CODEC(ZSTD(1)),
name LowCardinality(String) CODEC(ZSTD(1)),
kind Int8 CODEC(T64, ZSTD(1)),
durationNano UInt64 CODEC(T64, ZSTD(1)),
statusCode Int16 CODEC(T64, ZSTD(1)),
externalHttpMethod LowCardinality(String) CODEC(ZSTD(1)),
externalHttpUrl LowCardinality(String) CODEC(ZSTD(1)),
component LowCardinality(String) CODEC(ZSTD(1)),
dbSystem LowCardinality(String) CODEC(ZSTD(1)),
dbName LowCardinality(String) CODEC(ZSTD(1)),
dbOperation LowCardinality(String) CODEC(ZSTD(1)),
peerService LowCardinality(String) CODEC(ZSTD(1)),
events Array(String) CODEC(ZSTD(2)),
httpMethod LowCardinality(String) CODEC(ZSTD(1)),
httpUrl LowCardinality(String) CODEC(ZSTD(1)),
httpCode LowCardinality(String) CODEC(ZSTD(1)),
httpRoute LowCardinality(String) CODEC(ZSTD(1)),
httpHost LowCardinality(String) CODEC(ZSTD(1)),
msgSystem LowCardinality(String) CODEC(ZSTD(1)),
msgOperation LowCardinality(String) CODEC(ZSTD(1)),
hasError bool CODEC(T64, ZSTD(1)),
tagMap Map(LowCardinality(String), String) CODEC(ZSTD(1)),
gRPCMethod LowCardinality(String) CODEC(ZSTD(1)),
gRPCCode LowCardinality(String) CODEC(ZSTD(1)),
PROJECTION timestampSort (SELECT * ORDER BY timestamp),
INDEX idx_service serviceName TYPE bloom_filter GRANULARITY 4,
INDEX idx_name name TYPE bloom_filter GRANULARITY 4,
INDEX idx_kind kind TYPE minmax GRANULARITY 4,
INDEX idx_duration durationNano TYPE minmax GRANULARITY 1,
INDEX idx_httpCode httpCode TYPE set(0) GRANULARITY 1,
INDEX idx_hasError hasError TYPE set(2) GRANULARITY 1,
INDEX idx_tagMapKeys mapKeys(tagMap) TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX idx_tagMapValues mapValues(tagMap) TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX idx_httpRoute httpRoute TYPE bloom_filter GRANULARITY 4,
INDEX idx_httpUrl httpUrl TYPE bloom_filter GRANULARITY 4,
INDEX idx_httpHost httpHost TYPE bloom_filter GRANULARITY 4,
INDEX idx_httpMethod httpMethod TYPE bloom_filter GRANULARITY 4,
INDEX idx_timestamp timestamp TYPE minmax GRANULARITY 1
) ENGINE MergeTree()
PARTITION BY toDate(timestamp)
PRIMARY KEY (serviceName, hasError, toStartOfHour(timestamp), name)
ORDER BY (serviceName, hasError, toStartOfHour(timestamp), name, timestamp)
SETTINGS index_granularity = 8192;
```
* Sample data for all these tables
Tab-separated data
```
timestamp spanID traceID serviceName name durationNano httpCode gRPCCode gRPCMethod httpMethod
DateTime64(9) String FixedString(32) LowCardinality(String) LowCardinality(String) UInt64 LowCardinality(String) LowCardinality(String) LowCardinality(String) LowCardinality(String)
2022-05-31 17:53:50.947806000 138b4e5f791a98a1 e5d9ad80ff09593792ed91829080ffc8 goApp /books 3273333 200 GET
2022-05-31 17:53:50.947806000 138b4e5f791a98a1 e5d9ad80ff09593792ed91829080ffc8 goApp /books 3273333 200 GET
2022-05-31 17:53:50.947920000 b8df70866457c1de e5d9ad80ff09593792ed91829080ffc8 goApp gorm.Query 550458
2022-05-31 17:53:50.947920000 b8df70866457c1de e5d9ad80ff09593792ed91829080ffc8 goApp gorm.Query 550458
2022-05-31 17:53:51.121918000 e642fa8ca4c76a9f 983577555041547a24f1ffa29aea887c goApp HTTP GET route not found 10917 404 GET
2022-05-31 17:53:51.121918000 e642fa8ca4c76a9f 983577555041547a24f1ffa29aea887c goApp HTTP GET route not found 10917 404 GET
2022-05-31 11:06:59.635748000 c49213a4a49fd0bf 84270251231caf828b24c3e07edaf78f goApp /books/:id 32253250 400 GET
2022-05-31 11:06:59.635748000 c49213a4a49fd0bf 84270251231caf828b24c3e07edaf78f goApp /books/:id 32253250 400 GET
2022-05-31 11:06:59.638262000 5bc90a538f105fd2 84270251231caf828b24c3e07edaf78f goApp gorm.Query 11877708
2022-05-31 11:06:59.638262000 5bc90a538f105fd2 84270251231caf828b24c3e07edaf78f goApp gorm.Query 11877708
2022-05-31 11:07:00.072359000 09de3156dfaad430 5679075bd4dc193dabe98b1211c97c23 goApp HTTP GET route not found 11500 404 GET
2022-05-31 11:07:00.072359000 09de3156dfaad430 5679075bd4dc193dabe98b1211c97c23 goApp HTTP GET route not found 11500 404 GET
2022-05-31 11:07:40.285432000 e7ccad4fcbb6d42f 4f8dfcd16dcffa19242c401a445e12a0 goApp /books 10019125 200 GET
2022-05-31 11:07:40.285432000 e7ccad4fcbb6d42f 4f8dfcd16dcffa19242c401a445e12a0 goApp /books 10019125 200 GET
2022-05-31 11:07:40.285521000 18d39d06881d0856 4f8dfcd16dcffa19242c401a445e12a0 goApp gorm.Query 2040208
2022-05-31 11:07:40.285521000 18d39d06881d0856 4f8dfcd16dcffa19242c401a445e12a0 goApp gorm.Query 2040208
2022-05-31 11:07:40.480975000 dd9cec91fc5da1d4 beb68c69aafa89eb0fad3cfbd333ba17 goApp HTTP GET route not found 8167 404 GET
2022-05-31 11:07:40.480975000 dd9cec91fc5da1d4 beb68c69aafa89eb0fad3cfbd333ba17 goApp HTTP GET route not found 8167 404 GET
2022-05-31 11:07:43.999741000 1ac50b417268d0b1 030b881f77630915e08ac4d20632c557 goApp HTTP GET route not found 11500 404 GET
2022-05-31 11:07:43.999741000 1ac50b417268d0b1 030b881f77630915e08ac4d20632c557 goApp HTTP GET route not found 11500 404 GET
2022-05-31 11:07:44.159637000 18a71371c304bfa3 e84c6b1ce677325a9d9af931e79fd14e goApp HTTP GET route not found 16000 404 GET
2022-05-31 11:07:44.159637000 18a71371c304bfa3 e84c6b1ce677325a9d9af931e79fd14e goApp HTTP GET route not found 16000 404 GET
2022-05-31 17:53:42.264991000 65de937350431b80 65ad07f99eb2c54ffea397e4d0a24b4c goApp /books 64672584 200 GET
2022-05-31 17:53:42.264991000 65de937350431b80 65ad07f99eb2c54ffea397e4d0a24b4c goApp /books 64672584 200 GET
2022-05-31 17:53:42.265070000 4a0c5b2d6bd6b07a 65ad07f99eb2c54ffea397e4d0a24b4c goApp gorm.Query 519917
2022-05-31 17:53:42.265070000 4a0c5b2d6bd6b07a 65ad07f99eb2c54ffea397e4d0a24b4c goApp gorm.Query 519917
2022-05-31 17:53:42.579637000 7718219790279e80 20a5404c009e5e3f889204da7ae1e716 goApp HTTP GET route not found 8083 404 GET
2022-05-31 17:53:42.579637000 7718219790279e80 20a5404c009e5e3f889204da7ae1e716 goApp HTTP GET route not found 8083 404 GET
```
* Queries to run that lead to unexpected result
```
SELECT timestamp, spanID, traceID, serviceName, name, durationNano, httpCode, gRPCCode, gRPCMethod, httpMethod FROM signoz_traces.signoz_index_v2 WHERE timestamp >= '1553562083201000000' AND timestamp <= '1753563883201000000' ORDER BY timestamp ASC LIMIT 10 FORMAT TabSeparatedRawWithNamesAndTypes
```
**Expected behavior**
The data should be sorted in ascending order by timestamp and running this query multiple times should return consistent results of order. | https://github.com/ClickHouse/ClickHouse/issues/37673 | https://github.com/ClickHouse/ClickHouse/pull/37342 | ce48e8e1028ce394eeaf32d4e0240d38b0467932 | 87d445295f215e27b694dfa835d52f01328fb469 | "2022-05-31T05:55:49Z" | c++ | "2022-05-23T10:29:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,662 | ["src/Parsers/ExpressionElementParsers.cpp", "tests/queries/0_stateless/00031_parser_number.reference", "tests/queries/0_stateless/00948_values_interpreter_template.reference", "tests/queries/0_stateless/02316_literal_no_octal.reference", "tests/queries/0_stateless/02316_literal_no_octal.sql"] | Error in parsing ENUM Literal Values. |
**Describe the unexpected behaviour**
ClickHouse fails to parse an Enum definition when a literal value is written as 08 or 09. I tried to debug the problem, and it seems that Which::type returns Float64 for these literals, so the Enum parser throws an error.
**Notice the dot after the literal value 8 in the generated SQL. This does not happen for values such as 01, 03, 07, and so on; only `= 08` and `= 09` fail.**
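The reported Float64 diagnosis can be probed directly with a literal (a hypothetical check — the exact type name returned depends on the server version):

```sql
-- If the tokenizer mishandles the leading zero, 08 is not read as a plain integer:
SELECT toTypeName(08);
-- Enum values must be integer literals, so a Float64 literal here would explain the error.
```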
**How to reproduce**
```
:) CREATE TABLE t1 (`Road_Surface_Condition` Enum8('00 - Unkown' = 0,'07 - Others' = 07,'08 - Loose sand and gravel' = 8, '09 - Spilled liquid' = 9),objectId UInt32)Engine=MergeTree ORDER BY objectId
CREATE TABLE t1
(
`Road_Surface_Condition` Enum8('00 - Unkown' = 0, '07 - Others' = 7, '08 - Loose sand and gravel' = 8, '09 - Spilled liquid' = 9),
`objectId` UInt32
)
ENGINE = MergeTree
ORDER BY objectId
Query id: fc946666-baeb-49af-bda2-72df3fd5d6ea
Ok.
0 rows in set. Elapsed: 0.028 sec.
:) CREATE TABLE t1 (`Road_Surface_Condition` Enum8('00 - Unkown' = 0,'07 - Others' = 07),objectId UInt32)Engine=MergeTree ORDER BY objectId
CREATE TABLE t1
(
`Road_Surface_Condition` Enum8('00 - Unkown' = 0, '07 - Others' = 7),
`objectId` UInt32
)
ENGINE = MergeTree
ORDER BY objectId
Query id: ee00d448-d038-4876-85c3-a607b0a85d4c
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 22.3.6):
Code: 57. DB::Exception: Received from localhost:9000. DB::Exception: Table default.t1 already exists. (TABLE_ALREADY_EXISTS)
:) CREATE TABLE t2 (`Road_Surface_Condition` Enum8('00 - Unkown' = 0,'07 - Others' = 07),objectId UInt32)Engine=MergeTree ORDER BY objectId
CREATE TABLE t2
(
`Road_Surface_Condition` Enum8('00 - Unkown' = 0, '07 - Others' = 7),
`objectId` UInt32
)
ENGINE = MergeTree
ORDER BY objectId
Query id: f6eb283f-0480-43ad-8fc0-08925a6a13e5
Ok.
0 rows in set. Elapsed: 0.027 sec.
:)
```
* Which ClickHouse server version to use
v22.6
* Which interface to use, if matters
command line client.
**Expected behavior**
The table should be created successfully with the specified Enum column.
**Error message and/or stacktrace**
```
Received exception from server (version 22.3.6):
Code: 223. DB::Exception: Received from localhost:9000. DB::Exception: Elements of Enum data type must be of form: 'name' = number, where name is string literal and number is an integer. (UNEXPECTED_AST_STRUCTURE)
```
**Additional context**
| https://github.com/ClickHouse/ClickHouse/issues/37662 | https://github.com/ClickHouse/ClickHouse/pull/37765 | ef6f5a6500954fb48b506254960d505fcbc54d70 | 17fbf49d137f28c32a98db8539c98b7b071e5dd3 | "2022-05-30T18:24:59Z" | c++ | "2022-06-07T11:24:42Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,651 | ["src/Core/SortDescription.h", "src/Interpreters/InterpreterSelectQuery.cpp", "src/Processors/Transforms/FillingTransform.cpp", "tests/queries/0_stateless/02366_with_fill_date.reference", "tests/queries/0_stateless/02366_with_fill_date.sql"] | Timeout exception for query with ORDER BY WITH FILL | Hi there, it seems that the combination of `ORDER BY WITH FILL`, `FROM TO` and `STEP INTERVAL` is causing the query to hang and eventually to timeout (ignoring session timeout limit).
Sometimes the server hangs after a few such queries and doesn't respond to the KILL signal (I had to restart the VM).
**How to reproduce**
```sql
SELECT
count() AS value,
toStartOfMonth(d1) AS date
FROM (
SELECT
timestamp_add(toDateTime('2022-02-01'), INTERVAL number WEEK) AS d1
FROM numbers(18) AS number
)
GROUP BY date
ORDER BY
date
WITH FILL
FROM toDateTime('2022-02-01')
TO toDateTime('2022-06-01')
STEP INTERVAL 1 MONTH
```
**Error message and/or stacktrace**
```
2022.05.30 12:24:11.784134 [ 45 ] {c3091464-f1ce-4657-8498-ba67743dac1c} <Error> executeQuery: Code: 159. DB::Exception: Timeout exceeded: elapsed 38.329806547 seconds, maximum: 5. (TIMEOUT_EXCEEDED) (version 22.5.1.2079 (official build)) (from 172.17.0.1:61118) (in query: SELECT count() AS value, toStartOfMonth(d1) AS date FROM ( SELECT timestamp_add(toDateTime('2022-02-01'), INTERVAL number WEEK) AS d1 FROM numbers(18) AS number ) GROUP BY date ORDER BY date WITH FILL FROM toDateTime('2022-02-01') TO toDateTime('2022-06-01') STEP INTERVAL 1 MONTH FORMAT JSON ), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb4903fa in /usr/bin/clickhouse
1. DB::Exception::Exception<double, double>(int, fmt::v8::basic_format_string<char, fmt::v8::type_identity<double>::type, fmt::v8::type_identity<double>::type>, double&&, double&&) @ 0x15e7ebec in /usr/bin/clickhouse
2. DB::ExecutionSpeedLimits::checkTimeLimit(Stopwatch const&, DB::OverflowMode) const @ 0x15e7eacc in /usr/bin/clickhouse
3. DB::PipelineExecutor::finalizeExecution() @ 0x16d0c2a2 in /usr/bin/clickhouse
4. DB::CompletedPipelineExecutor::execute() @ 0x16d0a204 in /usr/bin/clickhouse
5. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x160dbc33 in /usr/bin/clickhouse
6. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x16a42871 in /usr/bin/clickhouse
7. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x16a46f22 in /usr/bin/clickhouse
8. DB::HTTPServerConnection::run() @ 0x16ccddf3 in /usr/bin/clickhouse
9. Poco::Net::TCPServerConnection::start() @ 0x1b0bbc2f in /usr/bin/clickhouse
10. Poco::Net::TCPServerDispatcher::run() @ 0x1b0be081 in /usr/bin/clickhouse
11. Poco::PooledThread::run() @ 0x1b284169 in /usr/bin/clickhouse
12. Poco::ThreadImpl::runnableEntry(void*) @ 0x1b2814c0 in /usr/bin/clickhouse
13. ? @ 0x7f4927458609 in ?
14. __clone @ 0x7f492737d133 in ?
2022.05.30 12:24:11.784437 [ 45 ] {c3091464-f1ce-4657-8498-ba67743dac1c} <Error> DynamicQueryHandler: Code: 159. DB::Exception: Timeout exceeded: elapsed 38.329806547 seconds, maximum: 5. (TIMEOUT_EXCEEDED), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xb4903fa in /usr/bin/clickhouse
1. DB::Exception::Exception<double, double>(int, fmt::v8::basic_format_string<char, fmt::v8::type_identity<double>::type, fmt::v8::type_identity<double>::type>, double&&, double&&) @ 0x15e7ebec in /usr/bin/clickhouse
2. DB::ExecutionSpeedLimits::checkTimeLimit(Stopwatch const&, DB::OverflowMode) const @ 0x15e7eacc in /usr/bin/clickhouse
3. DB::PipelineExecutor::finalizeExecution() @ 0x16d0c2a2 in /usr/bin/clickhouse
4. DB::CompletedPipelineExecutor::execute() @ 0x16d0a204 in /usr/bin/clickhouse
5. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x160dbc33 in /usr/bin/clickhouse
6. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x16a42871 in /usr/bin/clickhouse
7. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x16a46f22 in /usr/bin/clickhouse
8. DB::HTTPServerConnection::run() @ 0x16ccddf3 in /usr/bin/clickhouse
9. Poco::Net::TCPServerConnection::start() @ 0x1b0bbc2f in /usr/bin/clickhouse
10. Poco::Net::TCPServerDispatcher::run() @ 0x1b0be081 in /usr/bin/clickhouse
11. Poco::PooledThread::run() @ 0x1b284169 in /usr/bin/clickhouse
12. Poco::ThreadImpl::runnableEntry(void*) @ 0x1b2814c0 in /usr/bin/clickhouse
13. ? @ 0x7f4927458609 in ?
14. __clone @ 0x7f492737d133 in ?
(version 22.5.1.2079 (official build))
```
**Additional context**
I'm running latest stable version (22.5.1.2079) as a docker container on my laptop with default settings. but was able to reproduce it in play.clickhouse.com as well (sorry if it crashed anything π) | https://github.com/ClickHouse/ClickHouse/issues/37651 | https://github.com/ClickHouse/ClickHouse/pull/37849 | 44463cfca0af49bd50a6a1b994fe7fcac024f959 | 873432fb536c479b525ae30784a9e20357424eaf | "2022-05-30T13:14:49Z" | c++ | "2022-07-27T10:27:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,586 | ["src/Core/Settings.h", "src/Interpreters/OptimizeIfChains.h", "src/Interpreters/TreeOptimizer.cpp", "tests/queries/0_stateless/01355_if_fixed_string.sql", "tests/queries/0_stateless/02315_replace_multiif_to_if.reference", "tests/queries/0_stateless/02315_replace_multiif_to_if.sql"] | Replace `multiIf` to `if` in case of single branch. | **Describe the situation**
```
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT yr,
mo,
SUM(coffee_hourly_avg) AS coffee_monthly_sum,
AVG(coffee_hourly_avg) AS coffee_monthly_avg,
SUM(printer_hourly_avg) AS printer_monthly_sum,
AVG(printer_hourly_avg) AS printer_monthly_avg,
SUM(projector_hourly_avg) AS projector_monthly_sum,
AVG(projector_hourly_avg) AS projector_monthly_avg,
SUM(vending_hourly_avg) AS vending_monthly_sum,
AVG(vending_hourly_avg) AS vending_monthly_avg
FROM (
SELECT dt,
yr,
mo,
hr,
AVG(coffee) AS coffee_hourly_avg,
AVG(printer) AS printer_hourly_avg,
AVG(projector) AS projector_hourly_avg,
AVG(vending) AS vending_hourly_avg
FROM (
SELECT CAST(log_time AS DATE) AS dt,
EXTRACT(YEAR FROM log_time) AS yr,
EXTRACT(MONTH FROM log_time) AS mo,
EXTRACT(HOUR FROM log_time) AS hr,
CASE WHEN device_name LIKE 'coffee%' THEN event_value END AS coffee,
CASE WHEN device_name LIKE 'printer%' THEN event_value END AS printer,
CASE WHEN device_name LIKE 'projector%' THEN event_value END AS projector,
CASE WHEN device_name LIKE 'vending%' THEN event_value END AS vending
FROM logs3
WHERE device_type = 'meter'
) AS r
GROUP BY dt,
yr,
mo,
hr
) AS s
GROUP BY yr,
mo
ORDER BY yr,
mo;"
```
435 ms.
```
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT yr,
mo,
SUM(coffee_hourly_avg) AS coffee_monthly_sum,
AVG(coffee_hourly_avg) AS coffee_monthly_avg,
SUM(printer_hourly_avg) AS printer_monthly_sum,
AVG(printer_hourly_avg) AS printer_monthly_avg,
SUM(projector_hourly_avg) AS projector_monthly_sum,
AVG(projector_hourly_avg) AS projector_monthly_avg,
SUM(vending_hourly_avg) AS vending_monthly_sum,
AVG(vending_hourly_avg) AS vending_monthly_avg
FROM (
SELECT dt,
yr,
mo,
hr,
AVG(coffee) AS coffee_hourly_avg,
AVG(printer) AS printer_hourly_avg,
AVG(projector) AS projector_hourly_avg,
AVG(vending) AS vending_hourly_avg
FROM (
SELECT CAST(log_time AS DATE) AS dt,
EXTRACT(YEAR FROM log_time) AS yr,
EXTRACT(MONTH FROM log_time) AS mo,
EXTRACT(HOUR FROM log_time) AS hr,
device_name LIKE 'coffee%' ? event_value : NULL AS coffee,
device_name LIKE 'printer%' ? event_value : NULL AS printer,
device_name LIKE 'projector%' ? event_value : NULL AS projector,
device_name LIKE 'vending%' ? event_value : NULL AS vending
FROM logs3
WHERE device_type = 'meter'
) AS r
GROUP BY dt,
yr,
mo,
hr
) AS s
GROUP BY yr,
mo
ORDER BY yr,
mo;
"
```
303 ms.
1.44 times faster.
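For reference, the rewrite this issue proposes can be sketched on a toy query (a hedged sketch — it assumes, as the title says, that a single-branch CASE is currently parsed as `multiIf`):

```sql
-- A single-branch CASE ... END is parsed as multiIf(cond, value, NULL):
SELECT CASE WHEN number < 5 THEN number END AS x FROM numbers(10);
-- which is equivalent to:
SELECT multiIf(number < 5, number, NULL) AS x FROM numbers(10);
-- The proposed optimization would replace it with the cheaper two-branch form:
SELECT if(number < 5, number, NULL) AS x FROM numbers(10);
```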
**How to reproduce**
```
wget https://clickhouse-datasets.s3.yandex.net/mgbench{1..3}.csv.xz
curl https://clickhouse.com/ | sh
sudo ./clickhouse install
sudo clickhouse start
clickhouse-client -n --query "
CREATE DATABASE mgbench;
CREATE TABLE mgbench.logs1 (
log_time DateTime,
machine_name LowCardinality(String),
machine_group LowCardinality(String),
cpu_idle Nullable(Float32),
cpu_nice Nullable(Float32),
cpu_system Nullable(Float32),
cpu_user Nullable(Float32),
cpu_wio Nullable(Float32),
disk_free Nullable(Float32),
disk_total Nullable(Float32),
part_max_used Nullable(Float32),
load_fifteen Nullable(Float32),
load_five Nullable(Float32),
load_one Nullable(Float32),
mem_buffers Nullable(Float32),
mem_cached Nullable(Float32),
mem_free Nullable(Float32),
mem_shared Nullable(Float32),
swap_free Nullable(Float32),
bytes_in Nullable(Float32),
bytes_out Nullable(Float32)
)
ENGINE = Memory;
CREATE TABLE mgbench.logs2 (
log_time DateTime,
client_ip IPv4,
request String,
status_code UInt16,
object_size UInt64
)
ENGINE = Memory;
CREATE TABLE mgbench.logs3 (
log_time DateTime64,
device_id FixedString(15),
device_name LowCardinality(String),
device_type LowCardinality(String),
device_floor UInt8,
event_type LowCardinality(String),
event_unit FixedString(1),
event_value Nullable(Float32)
)
ENGINE = Memory;
"
xz -d mgbench*.csv.xz
time clickhouse-client --query "INSERT INTO mgbench.logs1 FORMAT CSVWithNames" < mgbench1.csv
time clickhouse-client --query "INSERT INTO mgbench.logs2 FORMAT CSVWithNames" < mgbench2.csv
time clickhouse-client --query "INSERT INTO mgbench.logs3 FORMAT CSVWithNames" < mgbench3.csv
echo "Q1.1: What is the CPU/network utilization for each web server since midnight?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT machine_name,
MIN(cpu) AS cpu_min,
MAX(cpu) AS cpu_max,
AVG(cpu) AS cpu_avg,
MIN(net_in) AS net_in_min,
MAX(net_in) AS net_in_max,
AVG(net_in) AS net_in_avg,
MIN(net_out) AS net_out_min,
MAX(net_out) AS net_out_max,
AVG(net_out) AS net_out_avg
FROM (
SELECT machine_name,
COALESCE(cpu_user, 0.0) AS cpu,
COALESCE(bytes_in, 0.0) AS net_in,
COALESCE(bytes_out, 0.0) AS net_out
FROM logs1
WHERE machine_name IN ('anansi','aragog','urd')
AND log_time >= TIMESTAMP '2017-01-11 00:00:00'
) AS r
GROUP BY machine_name;"
echo "Q1.2: Which computer lab machines have been offline in the past day?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT machine_name,
log_time
FROM logs1
WHERE (machine_name LIKE 'cslab%' OR
machine_name LIKE 'mslab%')
AND load_one IS NULL
AND log_time >= TIMESTAMP '2017-01-10 00:00:00'
ORDER BY machine_name,
log_time;"
echo "Q1.3: What are the hourly average metrics during the past 10 days for a specific workstation?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT dt,
hr,
AVG(load_fifteen) AS load_fifteen_avg,
AVG(load_five) AS load_five_avg,
AVG(load_one) AS load_one_avg,
AVG(mem_free) AS mem_free_avg,
AVG(swap_free) AS swap_free_avg
FROM (
SELECT CAST(log_time AS DATE) AS dt,
EXTRACT(HOUR FROM log_time) AS hr,
load_fifteen,
load_five,
load_one,
mem_free,
swap_free
FROM logs1
WHERE machine_name = 'babbage'
AND load_fifteen IS NOT NULL
AND load_five IS NOT NULL
AND load_one IS NOT NULL
AND mem_free IS NOT NULL
AND swap_free IS NOT NULL
AND log_time >= TIMESTAMP '2017-01-01 00:00:00'
) AS r
GROUP BY dt,
hr
ORDER BY dt,
hr;"
echo "Q1.4: Over a 1-month period, how often was each server blocked on disk I/O?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT machine_name,
COUNT(*) AS spikes
FROM logs1
WHERE machine_group = 'Servers'
AND cpu_wio > 0.99
AND log_time >= TIMESTAMP '2016-12-01 00:00:00'
AND log_time < TIMESTAMP '2017-01-01 00:00:00'
GROUP BY machine_name
ORDER BY spikes DESC
LIMIT 10;"
echo "Q1.5: Which externally reachable VMs have run low on memory?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT machine_name,
dt,
MIN(mem_free) AS mem_free_min
FROM (
SELECT machine_name,
CAST(log_time AS DATE) AS dt,
mem_free
FROM logs1
WHERE machine_group = 'DMZ'
AND mem_free IS NOT NULL
) AS r
GROUP BY machine_name,
dt
HAVING MIN(mem_free) < 10000
ORDER BY machine_name,
dt;"
echo "Q1.6: What is the total hourly network traffic across all file servers?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT dt,
hr,
SUM(net_in) AS net_in_sum,
SUM(net_out) AS net_out_sum,
SUM(net_in) + SUM(net_out) AS both_sum
FROM (
SELECT CAST(log_time AS DATE) AS dt,
EXTRACT(HOUR FROM log_time) AS hr,
COALESCE(bytes_in, 0.0) / 1000000000.0 AS net_in,
COALESCE(bytes_out, 0.0) / 1000000000.0 AS net_out
FROM logs1
WHERE machine_name IN ('allsorts','andes','bigred','blackjack','bonbon',
'cadbury','chiclets','cotton','crows','dove','fireball','hearts','huey',
'lindt','milkduds','milkyway','mnm','necco','nerds','orbit','peeps',
'poprocks','razzles','runts','smarties','smuggler','spree','stride',
'tootsie','trident','wrigley','york')
) AS r
GROUP BY dt,
hr
ORDER BY both_sum DESC
LIMIT 10;"
echo "Q2.1: Which requests have caused server errors within the past 2 weeks?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT *
FROM logs2
WHERE status_code >= 500
AND log_time >= TIMESTAMP '2012-12-18 00:00:00'
ORDER BY log_time;"
echo "Q2.2: During a specific 2-week period, was the user password file leaked?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT *
FROM logs2
WHERE status_code >= 200
AND status_code < 300
AND request LIKE '%/etc/passwd%'
AND log_time >= TIMESTAMP '2012-05-06 00:00:00'
AND log_time < TIMESTAMP '2012-05-20 00:00:00';"
echo "Q2.3: What was the average path depth for top-level requests in the past month?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT top_level,
AVG(LENGTH(request) - LENGTH(REPLACE(request, '/', ''))) AS depth_avg
FROM (
SELECT SUBSTRING(request FROM 1 FOR len) AS top_level,
request
FROM (
SELECT POSITION(SUBSTRING(request FROM 2), '/') AS len,
request
FROM logs2
WHERE status_code >= 200
AND status_code < 300
AND log_time >= TIMESTAMP '2012-12-01 00:00:00'
) AS r
WHERE len > 0
) AS s
WHERE top_level IN ('/about','/courses','/degrees','/events',
'/grad','/industry','/news','/people',
'/publications','/research','/teaching','/ugrad')
GROUP BY top_level
ORDER BY top_level;"
echo "Q2.4: During the last 3 months, which clients have made an excessive number of requests?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT client_ip,
COUNT(*) AS num_requests
FROM logs2
WHERE log_time >= TIMESTAMP '2012-10-01 00:00:00'
GROUP BY client_ip
HAVING COUNT(*) >= 100000
ORDER BY num_requests DESC;"
echo "Q2.5: What are the daily unique visitors?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT dt,
COUNT(DISTINCT client_ip)
FROM (
SELECT CAST(log_time AS DATE) AS dt,
client_ip
FROM logs2
) AS r
GROUP BY dt
ORDER BY dt;"
echo "Q2.6: What are the average and maximum data transfer rates (Gbps)?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT AVG(transfer) / 125000000.0 AS transfer_avg,
MAX(transfer) / 125000000.0 AS transfer_max
FROM (
SELECT log_time,
SUM(object_size) AS transfer
FROM logs2
GROUP BY log_time
) AS r;"
echo "Q3.1: Did the indoor temperature reach freezing over the weekend?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT *
FROM logs3
WHERE event_type = 'temperature'
AND event_value <= 32.0
AND log_time >= '2019-11-29 17:00:00.000';"
echo "Q3.4: Over the past 6 months, how frequently was each door opened?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
SELECT device_name,
device_floor,
COUNT(*) AS ct
FROM logs3
WHERE event_type = 'door_open'
AND log_time >= '2019-06-01 00:00:00.000'
GROUP BY device_name,
device_floor
ORDER BY ct DESC;"
echo "Q3.5: Where in the building do large temperature variations occur in winter and summer?"
clickhouse-benchmark --max_threads 32 --iterations 10 --database mgbench --enable_global_with_statement 1 --query "
WITH temperature AS (
SELECT dt,
device_name,
device_type,
device_floor
FROM (
SELECT dt,
hr,
device_name,
device_type,
device_floor,
AVG(event_value) AS temperature_hourly_avg
FROM (
SELECT CAST(log_time AS DATE) AS dt,
EXTRACT(HOUR FROM log_time) AS hr,
device_name,
device_type,
device_floor,
event_value
FROM logs3
WHERE event_type = 'temperature'
) AS r
GROUP BY dt,
hr,
device_name,
device_type,
device_floor
) AS s
GROUP BY dt,
device_name,
device_type,
device_floor
HAVING MAX(temperature_hourly_avg) - MIN(temperature_hourly_avg) >= 25.0
)
SELECT DISTINCT device_name,
device_type,
device_floor,
'WINTER'
FROM temperature
WHERE dt >= DATE '2018-12-01'
AND dt < DATE '2019-03-01'
UNION DISTINCT
SELECT DISTINCT device_name,
device_type,
device_floor,
'SUMMER'
FROM temperature
WHERE dt >= DATE '2019-06-01'
AND dt < DATE '2019-09-01';"
```
| https://github.com/ClickHouse/ClickHouse/issues/37586 | https://github.com/ClickHouse/ClickHouse/pull/37695 | c6574b15bc7f7323d9a3adf30b13c63d3b6de227 | 3ace07740171eae90c832173bba57203a8f7f192 | "2022-05-27T05:01:47Z" | c++ | "2022-06-03T12:56:43Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,569 | ["src/Interpreters/InterpreterSelectWithUnionQuery.cpp", "src/Interpreters/SelectQueryOptions.h", "src/Interpreters/TreeRewriter.cpp", "tests/queries/0_stateless/02227_union_match_by_name.reference", "tests/queries/0_stateless/02227_union_match_by_name.sql"] | Null pointer dereference in Aggregator |
```
<Fatal> BaseDaemon: ########################################
<Fatal> BaseDaemon: (version 22.5.1.2079 (official build), build id: EBBF91C52186EE12) (from thread 46419) (query_id: 51268d18-4093-4de4-b3fa-18f3e7ff3ea9) (query: SELECT avgWeighted(x, y) FROM (SELECT NULL, 255 AS x, 1 AS y UNION ALL SELECT y, NULL AS x, 1 AS y);) Received signal Segmentation fault (11)
<Fatal> BaseDaemon: ########################################
<Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
<Fatal> BaseDaemon: Stack trace: 0xd99865d 0xd998c25 0x155c4d5f 0x155c43f5 0x16ebb800 0x16eb91f6 0x16d18ee8 0x16d0cbfe 0x16d0e4c4 0xb53fc47 0xb54367d 0x7fb28832e609 0x7fb288253163
<Fatal> BaseDaemon: 2. DB::AggregateFunctionNullVariadic<true, true, true>::add(char*, DB::IColumn const**, unsigned long, DB::Arena*) const @ 0xd99865d in /usr/bin/clickhouse
<Fatal> BaseDaemon: 3. DB::IAggregateFunctionHelper<DB::AggregateFunctionNullVariadic<true, true, true> >::addBatchSinglePlace(unsigned long, unsigned long, char*, DB::IColumn const**, DB::Arena*, long) const @ 0xd998c25 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 4. void DB::Aggregator::executeWithoutKeyImpl<false>(char*&, unsigned long, unsigned long, DB::Aggregator::AggregateFunctionInstruction*, DB::Arena*) const @ 0x155c4d5f in /usr/bin/clickhouse
<Fatal> BaseDaemon: 5. DB::Aggregator::executeOnBlock(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >, unsigned long, unsigned long, DB::AggregatedDataVariants&, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> >&, std::__1::vector<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> >, std::__1::allocator<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > > >&, bool&) const @ 0x155c43f5 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 6. DB::AggregatingTransform::consume(DB::Chunk) @ 0x16ebb800 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 7. DB::AggregatingTransform::work() @ 0x16eb91f6 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 8. DB::ExecutionThreadContext::executeTask() @ 0x16d18ee8 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 9. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x16d0cbfe in /usr/bin/clickhouse
<Fatal> BaseDaemon: 10. ? @ 0x16d0e4c4 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 11. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb53fc47 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 12. ? @ 0xb54367d in /usr/bin/clickhouse
<Fatal> BaseDaemon: (version 22.5.1.2079 (official build), build id: EBBF91C52186EE12) (from thread 43826) (query_id: 51268d18-4093-4de4-b3fa-18f3e7ff3ea9) (query: SELECT avgWeighted(x, y) FROM (SELECT NULL, 255 AS x, 1 AS y UNION ALL SELECT y, NULL AS x, 1 AS y);) Received signal Segmentation fault (11)
<Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
<Fatal> BaseDaemon: 13. ? @ 0x7fb28832e609 in ?
<Fatal> BaseDaemon: Stack trace: 0xd99865d 0xd998c25 0x155c4d5f 0x155c43f5 0x16ebb800 0x16eb91f6 0x16d18ee8 0x16d0cbfe 0x16d0e4c4 0xb53fc47 0xb54367d 0x7fb28832e609 0x7fb288253163
<Fatal> BaseDaemon: 14. clone @ 0x7fb288253163 in ?
<Fatal> BaseDaemon: 2. DB::AggregateFunctionNullVariadic<true, true, true>::add(char*, DB::IColumn const**, unsigned long, DB::Arena*) const @ 0xd99865d in /usr/bin/clickhouse
<Fatal> BaseDaemon: 3. DB::IAggregateFunctionHelper<DB::AggregateFunctionNullVariadic<true, true, true> >::addBatchSinglePlace(unsigned long, unsigned long, char*, DB::IColumn const**, DB::Arena*, long) const @ 0xd998c25 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 4. void DB::Aggregator::executeWithoutKeyImpl<false>(char*&, unsigned long, unsigned long, DB::Aggregator::AggregateFunctionInstruction*, DB::Arena*) const @ 0x155c4d5f in /usr/bin/clickhouse
<Fatal> BaseDaemon: 5. DB::Aggregator::executeOnBlock(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >, unsigned long, unsigned long, DB::AggregatedDataVariants&, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> >&, std::__1::vector<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> >, std::__1::allocator<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > > >&, bool&) const @ 0x155c43f5 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 6. DB::AggregatingTransform::consume(DB::Chunk) @ 0x16ebb800 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 7. DB::AggregatingTransform::work() @ 0x16eb91f6 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 8. DB::ExecutionThreadContext::executeTask() @ 0x16d18ee8 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 9. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x16d0cbfe in /usr/bin/clickhouse
<Fatal> BaseDaemon: 10. ? @ 0x16d0e4c4 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 11. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb53fc47 in /usr/bin/clickhouse
<Fatal> BaseDaemon: 12. ? @ 0xb54367d in /usr/bin/clickhouse
<Fatal> BaseDaemon: 13. ? @ 0x7fb28832e609 in ?
<Fatal> BaseDaemon: 14. clone @ 0x7fb288253163 in ?
<Fatal> BaseDaemon: Calculated checksum of the binary: 725445B106D580F2BC17459FE7679BDC. There is no information about the reference checksum.
<Fatal> BaseDaemon: Calculated checksum of the binary: 725445B106D580F2BC17459FE7679BDC. There is no information about the reference checksum.
<Fatal> Application: Child process was terminated by signal 11.
```
https://s3.amazonaws.com/clickhouse-test-reports/34775/a813f5996e95e424193265bb090ef7a402497d6e/stress_test__undefined__actions_.html
https://s3.amazonaws.com/clickhouse-test-reports/34775/a813f5996e95e424193265bb090ef7a402497d6e/stress_test__debug__actions_.html
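For reference, the result this query should return instead of a segfault can be modelled in a few lines. This is an illustrative sketch of NULL-skipping weighted-average semantics, not ClickHouse code; `avg_weighted` is a hypothetical helper:

```python
def avg_weighted(rows):
    """Weighted mean over (value, weight) pairs, skipping pairs where
    either side is NULL (None here)."""
    num = den = 0.0
    for x, w in rows:
        if x is None or w is None:
            continue  # NULL in either argument excludes the whole row
        num += x * w
        den += w
    return num / den if den else None

# Rows produced by the UNION ALL in the repro: (255, 1) and (NULL, 1)
print(avg_weighted([(255, 1), (None, 1)]))  # 255.0
```

With only the `(255, 1)` pair surviving the NULL filter, the expected answer is `255`, not a crash.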
Might be related to #35111 cc @azat
| https://github.com/ClickHouse/ClickHouse/issues/37569 | https://github.com/ClickHouse/ClickHouse/pull/37593 | 473b0bd0db0e4e26ecdbf028770a920f234b571f | f58623a3755b50f83ccc1cfb946a5d8ba7ca7218 | "2022-05-26T15:26:26Z" | c++ | "2022-05-31T13:27:50Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,549 | ["tests/ci/build_check.py", "tests/ci/ci_config.py"] | Build in CI can be interrupted but results uploaded to S3 |
A build in CI can be interrupted but its results are still uploaded to S3 - this prevents further reruns until a new commit.
Example:
https://github.com/ClickHouse/ClickHouse/actions/runs/2378796282/attempts/2
Discussion (slack):
https://clickhouse-inc.slack.com/archives/C02N5V7EU57/p1653502032025939
| https://github.com/ClickHouse/ClickHouse/issues/37549 | https://github.com/ClickHouse/ClickHouse/pull/38086 | c83594284dc30c55568bbc0cf1b60bcb7f6be995 | e1547058cf6271ec5f97a93876f3b4209207134d | "2022-05-25T20:53:28Z" | c++ | "2022-06-20T09:49:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,531 | ["src/Functions/FunctionsRound.h", "tests/queries/0_stateless/00700_decimal_round.reference", "tests/queries/0_stateless/00700_decimal_round.sql"] | incorrect floor / ceiling / round with Decimal128/256 |
```sql
SELECT
floor(toDecimal64(1232.123132, 4) AS D64),
ceiling(D64),
round(D64),
roundBankers(D64),
floor(toDecimal128(1232.123132, 24) AS D128),
ceiling(D128),
round(D128),
roundBankers(D128),
floor(toDecimal256(1232.123132, 24) AS D256),
ceiling(D256),
round(D256),
roundBankers(D256)
FORMAT Vertical
Query id: 04db8c13-4888-47b6-908b-ba1268138f26
Row 1:
ββββββ
floor(toDecimal64(1232.123132, 4)): 1232
ceil(toDecimal64(1232.123132, 4)): 1233
round(toDecimal64(1232.123132, 4)): 1232
roundBankers(toDecimal64(1232.123132, 4)): 1232
floor(toDecimal128(1232.123132, 24)): 1232.12311679615299966394772
ceil(toDecimal128(1232.123132, 24)): 1232.123135242897073373499335
round(toDecimal128(1232.123132, 24)): 1232.123135242897073373499335
roundBankers(toDecimal128(1232.123132, 24)): 1232.123135242897073373499335
floor(toDecimal256(1232.123132, 24)): 1232.12311679615299966394772
ceil(toDecimal256(1232.123132, 24)): 1232.123135242897073373499335
round(toDecimal256(1232.123132, 24)): 1232.123135242897073373499335
roundBankers(toDecimal256(1232.123132, 24)): 1232.123135242897073373499335
```
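The Decimal64 column above is correct, while the Decimal128/256 results are not even whole numbers. A quick independent reference for what the four functions should return, using Python's `decimal` module (unrelated to ClickHouse internals):

```python
from decimal import Decimal, ROUND_FLOOR, ROUND_CEILING, ROUND_HALF_UP, ROUND_HALF_EVEN

v = Decimal("1232.123132")  # same literal, whatever the storage scale (4 or 24)

floor_v = v.to_integral_value(rounding=ROUND_FLOOR)      # 1232
ceil_v  = v.to_integral_value(rounding=ROUND_CEILING)    # 1233
round_v = v.to_integral_value(rounding=ROUND_HALF_UP)    # 1232
bank_v  = v.to_integral_value(rounding=ROUND_HALF_EVEN)  # 1232

print(floor_v, ceil_v, round_v, bank_v)  # 1232 1233 1232 1232
```

Rounding to integer precision must land on whole numbers regardless of scale; the fractional Decimal128/256 outputs above suggest the internal scale is not being applied correctly for the wide decimal types.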
| https://github.com/ClickHouse/ClickHouse/issues/37531 | https://github.com/ClickHouse/ClickHouse/pull/38027 | 59d32f8c96f169c9d20a3a7984b7cae29efc4750 | baebbc084f059cd4dfa148c3aa40b2286b042059 | "2022-05-25T14:17:36Z" | c++ | "2022-06-17T07:48:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,514 | ["src/Processors/Transforms/FillingTransform.cpp", "tests/queries/0_stateless/02112_with_fill_interval.reference", "tests/queries/0_stateless/02112_with_fill_interval.sql"] | The WITH FILL modifier can not be applied to Date in descending order |
**Describe the unexpected behaviour**
The `WITH FILL` modifier on an `ORDER BY` clause does not accept a negative
`INTERVAL` as a `STEP`, when the `ORDER BY` is applied to a `Date` column
and the order is `DESC`. This is unexpected because negative steps work
fine on numeric columns sorted in descending order.
**How to reproduce**
* Which ClickHouse server version to use
22.4.4.3
* Queries to run that lead to unexpected result
```sql
SELECT d
FROM
(
SELECT toDate(1) AS d
)
ORDER BY d DESC WITH FILL FROM toDate(3) TO toDate(0) STEP INTERVAL -1 DAY
;
```
```
Received exception from server (version 22.4.4):
Code: 475. DB::Exception: Received from localhost:9000. DB::Exception: Value of step is to low (-86400 seconds). Must be >= 1 day. (INVALID_WITH_FILL_EXPRESSION)
```
**Expected behavior**
Like numeric columns, `Date` should also support negative steps.
```sql
SELECT n
FROM
(
SELECT 1 AS n
)
ORDER BY n DESC WITH FILL FROM 3 TO 0 STEP -1
ββnββ
β 3 β
β 2 β
β 1 β
βββββ
```
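A negative one-day step over `Date` values is perfectly well defined. Mirroring the numeric example (where `FROM 3 TO 0` yields `3, 2, 1`, i.e. `TO` is exclusive), the descending fill the first query should produce can be sketched as follows; here `toDate(N)` is taken as N days after the epoch, 1970-01-01:

```python
from datetime import date, timedelta

def fill_desc(start, stop, step_days=-1):
    """Yield dates from `start` down to (but excluding) `stop`,
    mirroring `WITH FILL FROM start TO stop STEP INTERVAL -1 DAY`."""
    d, step = start, timedelta(days=step_days)
    while d > stop:
        yield d
        d += step

# toDate(3) .. toDate(0) from the repro
filled = list(fill_desc(date(1970, 1, 4), date(1970, 1, 1)))
print(filled)  # [datetime.date(1970, 1, 4), datetime.date(1970, 1, 3), datetime.date(1970, 1, 2)]
```

So there is no mathematical obstacle; the check rejecting steps below one day simply does not account for the descending direction.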
| https://github.com/ClickHouse/ClickHouse/issues/37514 | https://github.com/ClickHouse/ClickHouse/pull/37600 | 6db44f633f70d0647a9c6aaa0e1d7a54a6b04835 | 52d3791eb9ab187dca15f6737e67ff84eec5b210 | "2022-05-25T05:46:44Z" | c++ | "2022-05-30T17:43:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,434 | ["src/Coordination/KeeperServer.cpp", "tests/integration/test_keeper_force_recovery_single_node/__init__.py", "tests/integration/test_keeper_force_recovery_single_node/configs/enable_keeper1.xml", "tests/integration/test_keeper_force_recovery_single_node/configs/enable_keeper1_solo.xml", "tests/integration/test_keeper_force_recovery_single_node/configs/enable_keeper2.xml", "tests/integration/test_keeper_force_recovery_single_node/configs/enable_keeper3.xml", "tests/integration/test_keeper_force_recovery_single_node/configs/use_keeper.xml", "tests/integration/test_keeper_force_recovery_single_node/test.py"] | clickhouse-keeper --force-recovery useless after scale down keeper cluster in Kubernetes |
@antonio2368 , after https://github.com/ClickHouse/ClickHouse/pull/36258 and https://github.com/ClickHouse/ClickHouse/issues/35465
I tried to test your --force-recovery implementation and it failed completely.
When we scale the keeper cluster down from 3 nodes to 1, we need only one keeper node, but
`clickhouse-keeper --force-recovery` can't start properly: it still waits for the 2 other nodes, even when there is only one node in the peer list in the XML config:
```
<Information> KeeperDispatcher: Server is recovering, will not apply configuration until recovery is finished
```
> Again, keep in mind that such configuration changes are already possible with the Keeper if there is a quorum by just changing the XML.
I can't do that on all the other keeper nodes inside Kubernetes.
Kubernetes operates pods and containers in a declarative manner, so the only opportunity I see is to use
```yaml
lifecycle:
preStop: |
shell script which will rewrite XML config
```
but it doesn't allow us to change XML configs on other pods during scale down
Moreover, we don't have support for the `reconfigure add` and `reconfigure remove` ZooKeeper commands, so we can't remove a pod from the quorum peer list dynamically.
So, it makes --force-recovery useless in Kubernetes ;(
@alesapin maybe you could suggest how to run clickhouse-keeper in Kubernetes with scale down operation support?
| https://github.com/ClickHouse/ClickHouse/issues/37434 | https://github.com/ClickHouse/ClickHouse/pull/37440 | 8d876653e852e4e435924f0e7d388bfdb72d00f6 | ea19f1f18c89f9d0dd5223355046fb660352405f | "2022-05-23T03:22:45Z" | c++ | "2022-05-24T11:03:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,423 | ["src/Storages/MergeTree/DataPartsExchange.cpp"] | Possible deadlock during fetching part (only when I use a non-production feature "zero-copy replication") |
There is a possible deadlock in `Fetcher::fetchPart` when fetching with zero-copy fails.
```
...
Thread 1612 (Thread 0x7f4437dff700 (LWP 692837)):
#0 0x00007f448499a42d in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007f4484995dcb in _L_lock_812 () from /lib64/libpthread.so.0
#2 0x00007f4484995c98 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3 0x0000000009ea4491 in pthread_mutex_lock ()
#4 0x0000000015c0fa86 in std::__1::mutex::lock() ()
#5 0x0000000009f7bd1d in DB::makePooledHTTPSession(Poco::URI const&, Poco::URI const&, DB::ConnectionTimeouts const&, unsigned long, bool) ()
#6 0x0000000009f7bc9c in DB::makePooledHTTPSession(Poco::URI const&, DB::ConnectionTimeouts const&, unsigned long, bool) ()
#7 0x0000000012bd9611 in DB::UpdatablePooledSession::UpdatablePooledSession(Poco::URI, DB::ConnectionTimeouts const&, unsigned long, unsigned long) ()
#8 0x0000000012bd9422 in std::__1::shared_ptr<DB::UpdatablePooledSession> std::__1::allocate_shared<DB::UpdatablePooledSession, std::__1::allocator<DB::UpdatablePooledSession>, Poco::URI&, DB::ConnectionTimeouts const&, unsigned long const&, unsigned long&, void>(std::__1::allocator<DB::UpdatablePooledSession> const&, Poco::URI&, DB::ConnectionTimeouts const&, unsigned long const&, unsigned long&) ()
#9 0x0000000012bd6b85 in DB::PooledReadWriteBufferFromHTTP::PooledReadWriteBufferFromHTTP(Poco::URI, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::function<void (std::__1::basic_ostream<char, std::__1::char_traits<char> >&)>, DB::ConnectionTimeouts const&, Poco::Net::HTTPBasicCredentials const&, unsigned long, unsigned long, unsigned long) ()
#10 0x0000000012bce80d in DB::DataPartsExchange::Fetcher::fetchPart(std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, DB::ConnectionTimeouts const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Throttler>, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::optional<DB::CurrentlySubmergingEmergingTagger>*, bool, std::__1::shared_ptr<DB::IDisk>) ()
#11 0x0000000012a9d7f5 in std::__1::shared_ptr<DB::IMergeTreeDataPart> std::__1::__function::__policy_invoker<std::__1::shared_ptr<DB::IMergeTreeDataPart> ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::StorageReplicatedMergeTree::fetchPart(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, unsigned long, std::__1::shared_ptr<zkutil::ZooKeeper>)::$_19, std::__1::shared_ptr<DB::IMergeTreeDataPart> ()> >(std::__1::__function::__policy_storage const*) ()
#12 0x0000000012a2796f in DB::StorageReplicatedMergeTree::fetchPart(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, unsigned long, std::__1::shared_ptr<zkutil::ZooKeeper>) ()
#13 0x0000000012a1c23d in DB::StorageReplicatedMergeTree::executeFetch(DB::ReplicatedMergeTreeLogEntry&) ()
#14 0x0000000012da989d in DB::ReplicatedMergeMutateTaskBase::executeImpl() ()
#15 0x0000000012da8be2 in DB::ReplicatedMergeMutateTaskBase::executeStep() ()
#16 0x0000000012c3f20b in DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::routine(std::__1::shared_ptr<DB::TaskRuntimeData>) ()
#17 0x0000000012c3ff7b in DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::threadFunction() ()
#18 0x0000000009ea768d in ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) ()
#19 0x0000000009ea8f50 in ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>(void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&&)::{lambda()#1}::operator()() ()
#20 0x0000000009ea5c0e in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) ()
#21 0x0000000009ea80ae in void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}> >(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>) ()
#22 0x00007f4484993e25 in start_thread () from /lib64/libpthread.so.0
#23 0x00007f44846c035d in clone () from /lib64/libc.so.6
...
Thread 1607 (Thread 0x7f44355fa700 (LWP 692842)):
#0 0x00007f448499a42d in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007f4484995dcb in _L_lock_812 () from /lib64/libpthread.so.0
#2 0x00007f4484995c98 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3 0x0000000009ea4491 in pthread_mutex_lock ()
#4 0x0000000015c0fa86 in std::__1::mutex::lock() ()
#5 0x0000000009f7bd1d in DB::makePooledHTTPSession(Poco::URI const&, Poco::URI const&, DB::ConnectionTimeouts const&, unsigned long, bool) ()
#6 0x0000000009f7bc9c in DB::makePooledHTTPSession(Poco::URI const&, DB::ConnectionTimeouts const&, unsigned long, bool) ()
#7 0x0000000012bd9611 in DB::UpdatablePooledSession::UpdatablePooledSession(Poco::URI, DB::ConnectionTimeouts const&, unsigned long, unsigned long) ()
#8 0x0000000012bd9422 in std::__1::shared_ptr<DB::UpdatablePooledSession> std::__1::allocate_shared<DB::UpdatablePooledSession, std::__1::allocator<DB::UpdatablePooledSession>, Poco::URI&, DB::ConnectionTimeouts const&, unsigned long const&, unsigned long&, void>(std::__1::allocator<DB::UpdatablePooledSession> const&, Poco::URI&, DB::ConnectionTimeouts const&, unsigned long const&, unsigned long&) ()
#9 0x0000000012bd6b85 in DB::PooledReadWriteBufferFromHTTP::PooledReadWriteBufferFromHTTP(Poco::URI, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::function<void (std::__1::basic_ostream<char, std::__1::char_traits<char> >&)>, DB::ConnectionTimeouts const&, Poco::Net::HTTPBasicCredentials const&, unsigned long, unsigned long, unsigned long) ()
#10 0x0000000012bce80d in DB::DataPartsExchange::Fetcher::fetchPart(std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, DB::ConnectionTimeouts const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Throttler>, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::optional<DB::CurrentlySubmergingEmergingTagger>*, bool, std::__1::shared_ptr<DB::IDisk>) ()
#11 0x0000000012bd06d9 in DB::DataPartsExchange::Fetcher::fetchPart(std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, DB::ConnectionTimeouts const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Throttler>, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::optional<DB::CurrentlySubmergingEmergingTagger>*, bool, std::__1::shared_ptr<DB::IDisk>) ()
#12 0x0000000012a9d7f5 in std::__1::shared_ptr<DB::IMergeTreeDataPart> std::__1::__function::__policy_invoker<std::__1::shared_ptr<DB::IMergeTreeDataPart> ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::StorageReplicatedMergeTree::fetchPart(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, unsigned long, std::__1::shared_ptr<zkutil::ZooKeeper>)::$_19, std::__1::shared_ptr<DB::IMergeTreeDataPart> ()> >(std::__1::__function::__policy_storage const*) ()
#13 0x0000000012a2796f in DB::StorageReplicatedMergeTree::fetchPart(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, unsigned long, std::__1::shared_ptr<zkutil::ZooKeeper>) ()
#14 0x0000000012a1c23d in DB::StorageReplicatedMergeTree::executeFetch(DB::ReplicatedMergeTreeLogEntry&) ()
#15 0x0000000012da989d in DB::ReplicatedMergeMutateTaskBase::executeImpl() ()
#16 0x0000000012da8be2 in DB::ReplicatedMergeMutateTaskBase::executeStep() ()
#17 0x0000000012c3f20b in DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::routine(std::__1::shared_ptr<DB::TaskRuntimeData>) ()
#18 0x0000000012c3ff7b in DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::threadFunction() ()
#19 0x0000000009ea768d in ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) ()
#20 0x0000000009ea8f50 in ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>(void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&&)::{lambda()#1}::operator()() ()
#21 0x0000000009ea5c0e in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) ()
#22 0x0000000009ea80ae in void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}> >(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>) ()
#23 0x00007f4484993e25 in start_thread () from /lib64/libpthread.so.0
#24 0x00007f44846c035d in clone () from /lib64/libc.so.6
...
```
As the stack traces show, there is a recursive call to `fetchPart` when fetching with zero-copy fails. Each call requires obtaining an HTTP session from a session pool that has a maximum size limit. If many threads call `fetchPart` simultaneously, the pool may be exhausted by the first call, and all threads will then block on the second, eventually leading to a deadlock.
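The pool-exhaustion pattern visible in these traces (every worker already holds one pooled session and then blocks acquiring a second one) can be sketched with a semaphore standing in for the session pool. This is a simplified model, not the actual Poco/ClickHouse pool; the timeout exists only so the demo terminates instead of hanging:

```python
import threading

POOL_SIZE = 2
pool = threading.Semaphore(POOL_SIZE)   # stands in for the HTTP session pool
barrier = threading.Barrier(POOL_SIZE)
lock = threading.Lock()
stuck = []

def fetch_part(worker_id):
    pool.acquire()            # outer fetchPart: takes a pooled session
    barrier.wait()            # every worker now holds a session; pool is empty
    # zero-copy fetch "fails", fetchPart recurses and needs a second session
    if pool.acquire(timeout=0.2):
        pool.release()
    else:
        with lock:
            stuck.append(worker_id)  # would block forever without the timeout
    barrier.wait()            # keep sessions held until everyone has timed out
    pool.release()

workers = [threading.Thread(target=fetch_part, args=(i,)) for i in range(POOL_SIZE)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(f"{len(stuck)} of {POOL_SIZE} workers stuck")  # 2 of 2 workers stuck
```

Once every pooled session is held by a worker waiting for a second session, no release can ever happen, which is exactly the state in the traces above.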
| https://github.com/ClickHouse/ClickHouse/issues/37423 | https://github.com/ClickHouse/ClickHouse/pull/37424 | ff98c24d44dc1ef708b7b350c1a99ad9517d578d | 51868a9a4ff1f7d77757cb4172d5b50962e5d88d | "2022-05-22T10:27:55Z" | c++ | "2022-05-25T18:15:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,420 | ["src/Processors/Formats/Impl/BinaryRowInputFormat.cpp", "src/Processors/Formats/Impl/CSVRowInputFormat.cpp", "src/Processors/Formats/Impl/CustomSeparatedRowInputFormat.cpp", "src/Processors/Formats/Impl/JSONCompactEachRowRowInputFormat.cpp", "src/Processors/Formats/Impl/TabSeparatedRowInputFormat.cpp", "src/Processors/Formats/RowInputFormatWithNamesAndTypes.cpp", "src/Processors/Formats/RowInputFormatWithNamesAndTypes.h", "tests/queries/0_stateless/02306_rowbinary_has_no_bom.reference", "tests/queries/0_stateless/02306_rowbinary_has_no_bom.sh"] | HTTP body in RowBinary format started with BOM (0xEFBBBF) leads to `Cannot read all data` |
Batches started with `0xEFBBBF` cannot be inserted in the `RowBinary` format using HTTP.
**Does it reproduce on recent release?**
On the latest available on the docker hub, `22.1.3 revision 54455`.
**How to reproduce**
1. Create a table:
```sql
CREATE TABLE some(no UInt64) ENGINE = MergeTree ORDER BY no;
```
2. Write `1653178571234567890`:
```
echo -en '\xd2\x66\xb9\xd0\x26\x45\xf1\x16' | curl -v --data-binary @- 'http://127.0.0.1:8123/?query=INSERT%20INTO%20some(no)%20FORMAT%20RowBinary'
```
```
* Trying 127.0.0.1:8123...
* Connected to 127.0.0.1 (127.0.0.1) port 8123 (#0)
> POST /?query=INSERT%20INTO%20some(no)%20FORMAT%20RowBinary HTTP/1.1
> Host: 127.0.0.1:8123
> User-Agent: curl/7.83.0
> Accept: */*
> Content-Length: 8
> Content-Type: application/x-www-form-urlencoded
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Sun, 22 May 2022 00:53:08 GMT
< Connection: Keep-Alive
< Content-Type: text/plain; charset=UTF-8
< X-ClickHouse-Server-Display-Name: 4f17bcf5935b
< Transfer-Encoding: chunked
< Keep-Alive: timeout=3
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0"}
<
* Connection #0 to host 127.0.0.1 left intact
```
Works fine.
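The failure in step 3 below looks consistent with a text-oriented UTF-8 BOM skip being applied to binary input: dropping the first three bytes of a fixed-width row leaves too few bytes, matching the "Bytes read: 5. Bytes expected: 8" error. A minimal model of that hypothesis (the skipping logic is illustrative, not ClickHouse's actual code):

```python
import codecs
import struct

def skip_bom(data: bytes) -> bytes:
    """Text-style BOM skipping: harmless for text formats, destructive for
    binary ones, since any row may legitimately begin with EF BB BF."""
    if data.startswith(codecs.BOM_UTF8):
        return data[len(codecs.BOM_UTF8):]
    return data

# A UInt64 whose little-endian encoding happens to start with the BOM bytes:
row = struct.pack('<Q', 0x16EC3BAB00BFBBEF)
assert row.startswith(codecs.BOM_UTF8)
# After the skip only 5 of the 8 bytes remain, so the row cannot be decoded:
assert len(skip_bom(row)) == 5
```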
3. Write `1651760768976141295` (in fact, any number whose little-endian encoding starts with `0xEFBBBF`):
```
echo -en '\xef\xbb\xbf\x00\xab\x3b\xec\x16' | curl -v --data-binary @- 'http://127.0.0.1:8123/?query=INSERT%20INTO%20some(no)%20FORMAT%20RowBinary'
```
```
* Trying 127.0.0.1:8123...
* Connected to 127.0.0.1 (127.0.0.1) port 8123 (#0)
> POST /?query=INSERT%20INTO%20some(no)%20FORMAT%20RowBinary HTTP/1.1
> Host: 127.0.0.1:8123
> User-Agent: curl/7.83.0
> Accept: */*
> Content-Length: 8
> Content-Type: application/x-www-form-urlencoded
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 500 Internal Server Error
< Date: Sun, 22 May 2022 00:54:26 GMT
< Connection: Keep-Alive
< Content-Type: text/plain; charset=UTF-8
< X-ClickHouse-Server-Display-Name: 4f17bcf5935b
< Transfer-Encoding: chunked
< X-ClickHouse-Exception-Code: 33
< Keep-Alive: timeout=3
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0"}
<
Code: 33. DB::Exception: Cannot read all data. Bytes read: 5. Bytes expected: 8.: (at row 1)
: While executing BinaryRowInputFormat. (CANNOT_READ_ALL_DATA) (version 22.1.3.7 (official build))
* Connection #0 to host 127.0.0.1 left intact
``` | https://github.com/ClickHouse/ClickHouse/issues/37420 | https://github.com/ClickHouse/ClickHouse/pull/37428 | 8c0dba73024ae24f54fca28ca278f4d6e3dd8adc | 251be860e7dfcd481ec896ae23f1146806eec4d2 | "2022-05-22T01:06:59Z" | c++ | "2022-06-01T11:36:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,419 | ["src/Functions/FunctionShowCertificate.cpp", "src/Functions/FunctionShowCertificate.h", "src/Functions/registerFunctions.cpp"] | Implement function to inspect server's SSL certificate | **Use case**
Check and verify server's SSL certificate
**Describe the solution you'd like**
A function such as `showCertificate()` that exposes the standard certificate fields...
**Additional context**
Addition following #36583
| https://github.com/ClickHouse/ClickHouse/issues/37419 | https://github.com/ClickHouse/ClickHouse/pull/37540 | c6b20cd5edb8d9517fdaf5176e86025ddbbb83cb | 873ac9f8ff4659d490c60f481491614a2af6d5e9 | "2022-05-21T21:28:20Z" | c++ | "2022-05-31T06:50:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,401 | ["src/Interpreters/TreeOptimizer.cpp", "tests/queries/0_stateless/02304_grouping_set_order_by.reference", "tests/queries/0_stateless/02304_grouping_set_order_by.sql"] | grouping sets + order by (column `time` is not under aggregate) | ```sql
SELECT toStartOfHour(time) AS timex, id, count()
FROM
(
SELECT
concat('id', toString(number % 3)) AS id,
toDateTime('2020-01-01') + (number * 60) AS time
FROM numbers(100)
)
GROUP BY
GROUPING SETS ( (timex, id), (timex))
ORDER BY timex ASC;
Received exception from server (version 22.5.1):
Code: 215. DB::Exception: Received from localhost:9000.
DB::Exception: Column `time` is not under aggregate function and not in GROUP BY:
While processing time ASC. (NOT_AN_AGGREGATE)
``` | https://github.com/ClickHouse/ClickHouse/issues/37401 | https://github.com/ClickHouse/ClickHouse/pull/37493 | f321925032fd2de1cb3b308fdf9ae71aaf1c267f | 5c3c994d2a47cc05d044f7f645623c7445dd771b | "2022-05-20T20:56:19Z" | c++ | "2022-05-26T00:25:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,395 | ["src/Functions/generateUUIDv4.cpp", "tests/queries/0_stateless/02310_generate_multi_columns_with_uuid.reference", "tests/queries/0_stateless/02310_generate_multi_columns_with_uuid.sql"] | generateUUIDv4(tag argument) | to make it possible to generate two columns with UUID.
```
select generateUUIDv4(1), generateUUIDv4(2);
┌─generateUUIDv4(1)────────────────────┬─generateUUIDv4(2)────────────────────┐
│ bc834744-236b-4860-a961-4f8aafcae2b5 │ 37e7908c-e1d4-491f-b34d-dee0de7794df │
└──────────────────────────────────────┴──────────────────────────────────────┘
```
the same as rand() https://clickhouse.com/docs/en/sql-reference/functions/random-functions#rand
_x — [Expression](https://clickhouse.com/docs/en/sql-reference/syntax#syntax-expressions) resulting in any of the [supported data types](https://clickhouse.com/docs/en/sql-reference/data-types/#data_types). The resulting value is discarded, but the expression itself is used for bypassing [common subexpression elimination](https://clickhouse.com/docs/en/sql-reference/functions/#common-subexpression-elimination) if the function is called multiple times in one query. Optional parameter._
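A hedged sketch of why the dummy argument helps: under common subexpression elimination, syntactically identical pure calls are evaluated once, so a tag argument is a cheap way to keep two calls distinct (a toy model, not ClickHouse's planner):

```python
import uuid

def evaluate(expressions):
    """Toy common-subexpression elimination: identical expression strings
    are computed once and the cached result is reused."""
    cache = {}
    results = []
    for expr in expressions:
        if expr not in cache:
            cache[expr] = uuid.uuid4()
        results.append(cache[expr])
    return results

same = evaluate(['generateUUIDv4()', 'generateUUIDv4()'])
assert same[0] == same[1]          # collapsed into a single UUID

tagged = evaluate(['generateUUIDv4(1)', 'generateUUIDv4(2)'])
assert tagged[0] != tagged[1]      # distinct expressions, distinct UUIDs
```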
https://github.com/ClickHouse/ClickHouse/issues/37366 | https://github.com/ClickHouse/ClickHouse/issues/37395 | https://github.com/ClickHouse/ClickHouse/pull/37415 | b3ee8114d924772d2f164bca7e8badd8328dddc3 | 698e5e5352ac498b14753cd67f585c9c8d4b4df4 | "2022-05-20T19:33:56Z" | c++ | "2022-05-22T21:29:42Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,381 | ["src/Interpreters/ExpressionAnalyzer.cpp", "src/Interpreters/ExpressionAnalyzer.h", "tests/queries/0_stateless/02354_read_in_order_prewhere.reference", "tests/queries/0_stateless/02354_read_in_order_prewhere.sql"] | Exception with optimize_move_to_prewhere = 1 | ```
select version();
┌─version()───┐
│ 22.5.1.2079 │
└─────────────┘
CREATE TABLE order
(
`ID` String,
`Type` Enum8('TYPE_0' = 0, 'TYPE_1' = 1, 'TYPE_2' = 2),
`Num` UInt64,
`Data` String,
`RowCreatedAt` DateTime DEFAULT now()
)
ENGINE = ReplacingMergeTree()
PARTITION BY toYYYYMMDD(RowCreatedAt)
PRIMARY KEY ID
ORDER BY (ID, Type, Num)
TTL RowCreatedAt + toIntervalWeek(6)
SETTINGS index_granularity = 8192;
insert into order (ID, Type, Num, Data, RowCreatedAt) select toString(cityHash64(ID)%2000), case cityHash64(ID)%3 when 0 then 'TYPE_0' when 1 then 'TYPE_1' when 2 then 'TYPE_2' ELSE 'TYPE_0' END, cityHash64(ID), ID, toDateTime(toUInt32(now()) - round(rand32() / 4294967295 * 4100000, 0)) from generateRandom('ID String ', 1, 1000) limit 100000;
insert into order (ID, Type, Num, Data, RowCreatedAt) select toString(cityHash64(ID)%2000), case cityHash64(ID)%3 when 0 then 'TYPE_0' when 1 then 'TYPE_1' when 2 then 'TYPE_2' ELSE 'TYPE_0' END, cityHash64(ID), ID, toDateTime(toUInt32(now()) - round(rand32() / 4294967295 * 4100000, 0)) from generateRandom('ID String ', 1, 1000) limit 100000;
insert into order (ID, Type, Num, Data, RowCreatedAt) select toString(cityHash64(ID)%2000), case cityHash64(ID)%3 when 0 then 'TYPE_0' when 1 then 'TYPE_1' when 2 then 'TYPE_2' ELSE 'TYPE_0' END, cityHash64(ID), ID, toDateTime(toUInt32(now()) - round(rand32() / 4294967295 * 4100000, 0)) from generateRandom('ID String ', 1, 1000) limit 100000;
insert into order (ID, Type, Num, Data, RowCreatedAt) select toString(cityHash64(ID)%2000), case cityHash64(ID)%3 when 0 then 'TYPE_0' when 1 then 'TYPE_1' when 2 then 'TYPE_2' ELSE 'TYPE_0' END, cityHash64(ID), ID, toDateTime(toUInt32(now()) - round(rand32() / 4294967295 * 4100000, 0)) from generateRandom('ID String ', 1, 1000) limit 100000;
insert into order (ID, Type, Num, Data, RowCreatedAt) select toString(cityHash64(ID)%2000), case cityHash64(ID)%3 when 0 then 'TYPE_0' when 1 then 'TYPE_1' when 2 then 'TYPE_2' ELSE 'TYPE_0' END, cityHash64(ID), ID, toDateTime(toUInt32(now()) - round(rand32() / 4294967295 * 4100000, 0)) from generateRandom('ID String ', 1, 1000) limit 100000;
insert into order (ID, Type, Num, Data, RowCreatedAt) select toString(cityHash64(ID)%2000), case cityHash64(ID)%3 when 0 then 'TYPE_0' when 1 then 'TYPE_1' when 2 then 'TYPE_2' ELSE 'TYPE_0' END, cityHash64(ID), ID, toDateTime(toUInt32(now()) - round(rand32() / 4294967295 * 4100000, 0)) from generateRandom('ID String ', 1, 1000) limit 100000;
insert into order (ID, Type, Num, Data, RowCreatedAt) select toString(cityHash64(ID)%2000), case cityHash64(ID)%3 when 0 then 'TYPE_0' when 1 then 'TYPE_1' when 2 then 'TYPE_2' ELSE 'TYPE_0' END, cityHash64(ID), ID, toDateTime(toUInt32(now()) - round(rand32() / 4294967295 * 4100000, 0)) from generateRandom('ID String ', 1, 1000) limit 100000;
insert into order (ID, Type, Num, Data, RowCreatedAt) select toString(cityHash64(ID)%2000), case cityHash64(ID)%3 when 0 then 'TYPE_0' when 1 then 'TYPE_1' when 2 then 'TYPE_2' ELSE 'TYPE_0' END, cityHash64(ID), ID, toDateTime(toUInt32(now()) - round(rand32() / 4294967295 * 4100000, 0)) from generateRandom('ID String ', 1, 1000) limit 100000;
insert into order (ID, Type, Num, Data, RowCreatedAt) select toString(cityHash64(ID)%2000), case cityHash64(ID)%3 when 0 then 'TYPE_0' when 1 then 'TYPE_1' when 2 then 'TYPE_2' ELSE 'TYPE_0' END, cityHash64(ID), ID, toDateTime(toUInt32(now()) - round(rand32() / 4294967295 * 4100000, 0)) from generateRandom('ID String ', 1, 1000) limit 100000;
select count(*) from order;
┌─count()─┐
│  892441 │
└─────────┘
set optimize_move_to_prewhere = 0;
SELECT Data
FROM order
WHERE (ID = '1') AND (Type = 'TYPE_1')
ORDER BY Num ASC
FORMAT `Null`
Ok.
set optimize_move_to_prewhere = 1;
SELECT Data
FROM order
WHERE (ID = '1') AND (Type = 'TYPE_1')
ORDER BY Num ASC
FORMAT `Null`
Received exception from server (version 22.5.1):
Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Not found column Type in block. There are only columns: ID, Num, equals(Type, 'TYPE_1'), Data. (NOT_FOUND_COLUMN_IN_BLOCK)
``` | https://github.com/ClickHouse/ClickHouse/issues/37381 | https://github.com/ClickHouse/ClickHouse/pull/39157 | c669721d46519a51c9e61c40bee8bd1fa8a7995a | f6a82a6a5345fd36cd5198e09c4ad8398aa4d310 | "2022-05-20T07:42:51Z" | c++ | "2022-07-15T14:37:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,368 | ["src/Storages/System/StorageSystemCertificates.cpp"] | Possible memory leaks in system.certificates implementation | Memory leaks in system.certificates implementation due to possible exception. Example:
https://github.com/ClickHouse/ClickHouse/pull/37142/files#diff-1e24e43bfc38cec9770c3cc831b14d045c8138a60d7b77de8d32d96f4a2387e7R110
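The leak pattern is the classic one where a manually freed resource is allocated before code that can throw. It is sketched generically here in Python (the proper fix in the C++ code would be RAII wrappers or smart pointers around the OpenSSL objects):

```python
class Handle:
    """Stand-in for a resource that needs an explicit free (e.g. an X509/BIO)."""
    open_count = 0
    def __init__(self):
        Handle.open_count += 1
    def free(self):
        Handle.open_count -= 1

def build_row_leaky(fail=True):
    h = Handle()
    if fail:
        raise ValueError('bad certificate field')  # h.free() never runs -> leak
    h.free()

def build_row_safe(fail=True):
    h = Handle()
    try:
        if fail:
            raise ValueError('bad certificate field')
    finally:
        h.free()                                   # freed on every path

try:
    build_row_leaky()
except ValueError:
    pass
assert Handle.open_count == 1                      # one handle leaked

try:
    build_row_safe()
except ValueError:
    pass
assert Handle.open_count == 1                      # no additional leak
```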
ref #37142 | https://github.com/ClickHouse/ClickHouse/issues/37368 | https://github.com/ClickHouse/ClickHouse/pull/37407 | d878f193d81730f413196d60531ac7adffec26cf | 790f442362f68374037c225000779ae6e293c407 | "2022-05-19T14:59:41Z" | c++ | "2022-05-21T21:15:30Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,360 | ["src/Columns/IColumn.h", "src/DataTypes/IDataType.cpp", "src/DataTypes/IDataType.h", "src/Processors/QueryPlan/AggregatingStep.cpp", "src/Processors/Transforms/CubeTransform.cpp", "src/Processors/Transforms/RollupTransform.cpp", "tests/queries/0_stateless/02313_group_by_modifiers_with_non-default_types.reference", "tests/queries/0_stateless/02313_group_by_modifiers_with_non-default_types.sql"] | Enum without zero value is not supported in WITH CUBE, WITH ROLLUP and GROUPING SETS | ```
play-eu :) SELECT actor_login, event_type, count() FROM github_events WHERE repo_name = 'ClickHouse/ClickHouse' GROUP BY actor_login, event_type WITH ROLLUP ORDER BY count() DESC LIMIT 10
SELECT
actor_login,
event_type,
count()
FROM github_events
WHERE repo_name = 'ClickHouse/ClickHouse'
GROUP BY
actor_login,
event_type
WITH ROLLUP
ORDER BY count() DESC
LIMIT 10
Query id: 1b7baa06-2ee6-4ae8-b1ed-4c9ecb21514c
Ok.
Exception on client:
Code: 36. DB::Exception: Code: 36. DB::Exception: Unexpected value 0 in enum. (BAD_ARGUMENTS) (version 22.5.1.1984 (official build)). (BAD_ARGUMENTS)
``` | https://github.com/ClickHouse/ClickHouse/issues/37360 | https://github.com/ClickHouse/ClickHouse/pull/37667 | 04e25bc044aeb79edd6e05bd847f568b5a4a19a1 | ab9fc572d536ec9b3b7b2ac8628a3824bdeeda4c | "2022-05-19T13:06:23Z" | c++ | "2022-06-15T02:14:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,333 | ["src/Storages/MergeTree/KeyCondition.cpp", "tests/queries/0_stateless/02462_match_regexp_pk.reference", "tests/queries/0_stateless/02462_match_regexp_pk.sql"] | `match` function can use index if it's a condition on string prefix. | `SELECT ... WHERE match(key, '^prefix...')`
should use index similarly to
`SELECT ... WHERE key LIKE 'prefix%'` | https://github.com/ClickHouse/ClickHouse/issues/37333 | https://github.com/ClickHouse/ClickHouse/pull/42458 | d18d08bcc78431977abe735e3bc6793e6e2308c2 | bab0e06e3d988b02840cb089fc64e53b83612a04 | "2022-05-18T18:27:50Z" | c++ | "2022-10-31T21:33:58Z" |
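This is feasible because an anchored pattern pins down a literal prefix that can drive the same range analysis as `LIKE 'prefix%'`. A rough, hedged extractor sketch (real regex handling has many more cases than this):

```python
def fixed_prefix(pattern: str) -> str:
    """Longest literal prefix guaranteed by an anchored regex. A quantifier
    makes the preceding character optional/repeatable, so it is dropped."""
    if not pattern.startswith('^'):
        return ''                     # unanchored: no prefix constraint
    out = []
    for ch in pattern[1:]:
        if ch in '*?+{':
            if out:
                out.pop()
            break
        if ch in '.[]()|\\$':
            break                     # first metacharacter ends the literal run
        out.append(ch)
    return ''.join(out)

assert fixed_prefix('^prefix.*') == 'prefix'   # usable as a key range scan
assert fixed_prefix('^ab*c') == 'a'            # 'b' may repeat zero times
assert fixed_prefix('prefix') == ''            # unanchored match scans everything
```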
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,331 | ["docs/en/sql-reference/statements/create/user.md", "docs/ru/sql-reference/statements/create/user.md", "src/Parsers/Access/ParserCreateUserQuery.cpp", "src/Parsers/tests/gtest_Parser.cpp"] | CREATE USER fails with "ON CLUSTER" and "sha256_password" | I'm trying to create a new user on a clickhouse cluster with 3 shards using this command:
`CREATE USER IF NOT EXISTS user ON CLUSTER '{cluster}' IDENTIFIED WITH sha256_password BY 'password' DEFAULT ROLE NONE`
This command gets expanded to:
`CREATE USER IF NOT EXISTS user ON CLUSTER {cluster} IDENTIFIED WITH sha256_hash BY '<hash>' SALT '<salt>' DEFAULT ROLE NONE`
However, on each of the shards I get this error:
`Syntax error: failed at position 123 ('SALT'): SALT '<salt>' DEFAULT ROLE NONE. Expected one of: HOST, SETTINGS, DEFAULT ROLE, GRANTEES, DEFAULT DATABASE, end of query.`
However, if I switch to use `plaintext_password` instead, everything works fine. Looks like the expansion and the addition of the "SALT" was not expected when handling this command.
* Which ClickHouse server version to use: 22.4
**Expected behavior**
The user to be created with no errors.
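A plausible reading of the failure, modeled as a toy parser: the initiator rewrites the statement into the `sha256_hash ... SALT ...` form, but the clause list accepted after the hash does not include `SALT`, so the shards fail to re-parse the distributed DDL. Illustrative only, with assumed names:

```python
ACCEPTED_AFTER_HASH = {'HOST', 'SETTINGS', 'DEFAULT ROLE', 'GRANTEES', 'DEFAULT DATABASE'}

def parse_create_user_tail(clauses, accepted=frozenset(ACCEPTED_AFTER_HASH)):
    """Toy clause checker: rejects any clause the grammar does not know."""
    for clause in clauses:
        if clause not in accepted:
            raise SyntaxError(
                f'failed at {clause!r}: expected one of ' + ', '.join(sorted(accepted)))

# The rewritten statement each shard receives carries a SALT clause:
salt_rejected = False
try:
    parse_create_user_tail(['SALT', 'DEFAULT ROLE'])
except SyntaxError:
    salt_rejected = True
assert salt_rejected

# Teaching the grammar about SALT (the obvious fix) makes the tail parse:
parse_create_user_tail(['SALT', 'DEFAULT ROLE'], ACCEPTED_AFTER_HASH | {'SALT'})
```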
Thanks! | https://github.com/ClickHouse/ClickHouse/issues/37331 | https://github.com/ClickHouse/ClickHouse/pull/37377 | 3dfa30dedd65505b654a9f345448d54dabf24fce | ce1df15e1c757768e5f60d2bd8542c0b41a447ad | "2022-05-18T16:46:03Z" | c++ | "2022-05-21T18:36:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,329 | ["src/Interpreters/Context.cpp", "src/Interpreters/Context.h"] | DataTypeFactory::get locks a mutex from context. | `DataTypeFactory::get` locks a mutex inside a shared context; there was a case where many threads were blocked inside it
https://pastila.nl/?0043b3e6/c557368f4cf0eb87a3c6f2cc162bc693
It is not clear which thread locked the mutex for a long time. However, it may be possible to lock less inside `DataTypeFactory::get`. | https://github.com/ClickHouse/ClickHouse/issues/37329 | https://github.com/ClickHouse/ClickHouse/pull/37532 | 3a92e61827c1e060a6214fd5dbdf1cfe165c4b71 | fea2401f1fa4985ef6ee918fc835f22fde815862 | "2022-05-18T15:12:52Z" | c++ | "2022-05-26T11:03:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,298 | ["src/Functions/normalizeString.cpp", "tests/queries/0_stateless/02311_normalize_utf8_constant.reference", "tests/queries/0_stateless/02311_normalize_utf8_constant.sql"] | hex(normalizeUTF8NFC LOGICAL_ERROR | ```sql
SELECT hex(normalizeUTF8NFC('â'))
Received exception from server (version 22.4.5):
Code: 49. DB::Exception: Received from localhost:9000.
DB::Exception: Column size mismatch (internal logical error):
While processing hex(normalizeUTF8NFC('â')). (LOGICAL_ERROR)
```
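For reference, the normalization these queries ask for behaves as follows (Python's `unicodedata`; this illustrates the expected values, while the ClickHouse bug itself is about constant-column sizes):

```python
import unicodedata

composed = '\u00e2'        # 'â' as one code point, the NFC form
decomposed = 'a\u0302'     # 'a' + COMBINING CIRCUMFLEX ACCENT, the NFD form

assert unicodedata.normalize('NFC', decomposed) == composed
assert unicodedata.normalize('NFD', composed) == decomposed

# hex() of each form is what the hex(...) calls in these queries report:
assert composed.encode('utf-8').hex().upper() == 'C3A2'
assert decomposed.encode('utf-8').hex().upper() == '61CC82'
```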
```sql
SELECT
'â' AS s,
normalizeUTF8NFC(s) s1,
normalizeUTF8NFD(s) s2,
normalizeUTF8NFKC(s) s3,
normalizeUTF8NFKD(s) s4,
hex(s),
hex(s1),
hex(s2),
hex(s3),
hex(s4)
``` | https://github.com/ClickHouse/ClickHouse/issues/37298 | https://github.com/ClickHouse/ClickHouse/pull/37443 | e33cfc889cdb5371749afb8b07db470946781fec | 712b000f2aa6db53a1dd174fc73768b29cfab11e | "2022-05-17T15:58:45Z" | c++ | "2022-05-24T09:11:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,280 | ["src/Access/ContextAccess.cpp", "tests/queries/0_stateless/02315_readonly_create_function.reference", "tests/queries/0_stateless/02315_readonly_create_function.sh"] | Users can create functions in the case of readonly. | clickhouse version: 22.3.6.5
users.yaml :
```
public_ro:
password: "test"
networks:
ip: '::/0'
profile: readonly
quota: default
allow_databases:
- database: public
```
exec result:
```
clickhouse-0-0.clickhouse-headless.default.svc.cluster.local :) create table test(test Uint32) engine=memory();
CREATE TABLE test
(
`test` Uint32
)
ENGINE = memory
Query id: c976c324-efa1-418d-a6de-b615a9f8947d
0 rows in set. Elapsed: 0.001 sec.
Received exception from server (version 22.3.6):
Code: 164. DB::Exception: Received from localhost:9000. DB::Exception: public_ro: Cannot execute query in readonly mode. (READONLY)
clickhouse-0-0.clickhouse-headless.default.svc.cluster.local :) create function test as a -> a*2;
CREATE FUNCTION test AS a -> (a * 2)
Query id: b5b59b1c-5209-4de6-b521-9d6b50f5ce33
Ok.
0 rows in set. Elapsed: 0.003 sec.
clickhouse-0-0.clickhouse-headless.default.svc.cluster.local :) drop function test;
DROP FUNCTION test
Query id: ee5f2061-4ff0-4627-bd3f-c7052406d484
Ok.
0 rows in set. Elapsed: 0.001 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/37280 | https://github.com/ClickHouse/ClickHouse/pull/37699 | 8c40f05d4cba0583339b10c98537307329df942e | 1d9c8351a0703231938fcb4b2d90cea192f6f439 | "2022-05-17T06:28:22Z" | c++ | "2022-06-02T09:04:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,274 | ["src/Dictionaries/HTTPDictionarySource.cpp", "src/Storages/ExternalDataSourceConfiguration.cpp", "src/Storages/ExternalDataSourceConfiguration.h", "tests/integration/test_storage_dict/__init__.py", "tests/integration/test_storage_dict/configs/conf.xml", "tests/integration/test_storage_dict/test.py"] | Add support for HTTP source for Data Dictionaries in Named Collections | **Use case**
For creating external data dictionary with settings (url, headers, etc.) from config.xml `<named_collections>` for HTTP sources.
This is to allow updating per environment test, staging, prod without updating `CREATE DICTIONARY` statement, values set externally.
**Describe the solution you'd like**
Add HTTP source options as currently available for PostgreSQL, MySQL, S3 and Kafka:
https://clickhouse.com/docs/en/operations/named-collections/
keys should be `url`, `user`,`password`, `headers`.
**Describe alternatives you've considered**
use different data source for named collections.
use custom settings.
cc: @kssenii
| https://github.com/ClickHouse/ClickHouse/issues/37274 | https://github.com/ClickHouse/ClickHouse/pull/37581 | 7fbe91ca810ef70c5e083278fcab817cad6d28c7 | e23cec01d525dc7423d16144319fa432274ba5b9 | "2022-05-16T22:16:28Z" | c++ | "2022-06-01T15:55:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,237 | ["src/Storages/StorageMaterializedView.cpp", "src/Storages/StorageMaterializedView.h"] | Exception when insert into materialized view | > You have to provide the following information whenever possible.
Exception when insert into materialized view
**Does it reproduce on recent release?**
Reproduce on `ClickHouse 22.5.1.1 with revision 54462`
**How to reproduce**
* Which ClickHouse server version to use: ClickHouse 22.5.1.1 with revision 54462
---
1. Remove clickhouse data directory
2. Start clickhouse server
3. create the following tables
```
CREATE TABLE mt(a Int32, timestamp DateTime) ENGINE=MergeTree ORDER BY tuple();
CREATE MATERIALIZED VIEW mv ENGINE MergeTree ORDER BY tuple() AS SELECT count(a) AS count FROM mt;
```
4. Drop table mv
```
DROP TABLE mv
```
5. Terminate server by ctrl-c, then we can find `default.mv.UUID-OLD.sql` in `/metadata_dropped`
6. Start the server; a dependency of mt on UUID-OLD will be added by StorageMaterializedView:132 (possibly from metadata_dropped)
7. Recreate mv, and the server will add a dependency of mt to UUID-new-mv. Thus there are two dependencies in server, 'mv -> UUID-OLD' and 'mv -> UUID-new-mv`
```
CREATE MATERIALIZED VIEW mv ENGINE MergeTree ORDER BY tuple() AS SELECT count(a) AS count FROM mt;
```
8. Insert data into mt; an exception will be caused by the old dependency.
```
:) INSERT INTO mt VALUES (1, now());
INSERT INTO mt FORMAT Values
Query id: 06910fcc-ba7c-47a2-b821-0ff959988f59
0 rows in set. Elapsed: 0.024 sec.
Received exception from server (version 22.5.1):
Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.mv (182c0880-5b8e-4972-aebc-b992d3654c06) doesn't exist. (UNKNOWN_TABLE)
```
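Steps 6 and 7 can be modeled as a dependency map that still holds the UUID recovered from `metadata_dropped`; pushing the insert to that stale target is what would raise UNKNOWN_TABLE (a toy model with assumed names):

```python
dependencies = {'default.mt': set()}

# Step 6: on startup the dropped view's UUID is re-registered from metadata_dropped:
dependencies['default.mt'].add('UUID-OLD')
# Step 7: recreating the view registers the new UUID as well:
dependencies['default.mt'].add('UUID-new-mv')

live_tables = {'UUID-new-mv'}
stale = dependencies['default.mt'] - live_tables
assert stale == {'UUID-OLD'}   # the insert tries to push here -> UNKNOWN_TABLE
```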
**Error message and/or stacktrace**
```
Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.mv (182c0880-5b8e-4972-aebc-b992d3654c06) doesn't exist. (UNKNOWN_TABLE)
```
| https://github.com/ClickHouse/ClickHouse/issues/37237 | https://github.com/ClickHouse/ClickHouse/pull/37243 | 87d445295f215e27b694dfa835d52f01328fb469 | b6bf283f4d5ac6c54a8db9c99bbeb28c8921979c | "2022-05-16T07:42:39Z" | c++ | "2022-05-23T10:33:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,231 | ["tests/queries/0_stateless/02311_range_hashed_dictionary_range_cast.reference", "tests/queries/0_stateless/02311_range_hashed_dictionary_range_cast.sql"] | RANGE_HASHED throws Value in column UInt64 cannot be safely converted into type Int64 | ```sql
21.8.13
create table dict_source (key UInt64, start UInt64, end UInt64, x Float64) Engine=Memory;
insert into dict_source values (1, 0, 18446744073709551615, inf);
CREATE DICTIONARY dict ( key UInt64, start UInt64, end UInt64, x Float64 )
PRIMARY KEY key SOURCE(CLICKHOUSE(TABLE dict_source DB 'default'))
LIFETIME(MIN 0 MAX 0) LAYOUT(range_hashed())
RANGE(MIN start MAX end);
select dictGet('dict', 'x', toUInt64(1), toUInt64(-1));
DB::Exception: Value in column UInt64 cannot be safely converted into type Int64:
While processing dictGet('dict', 'x', toUInt64(1), toUInt64(-1)).
```
It worked in 20.8.
```
22.4.5.8.
select dictGet('dict', 'x', toUInt64(1), toUInt64(-1));
┌─dictGet('dict', 'x', toUInt64(1), toUInt64(-1))─┐
│ inf                                             │
└─────────────────────────────────────────────────┘
```
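The 21.8 failure looks consistent with the range argument being funneled through a signed 64-bit type internally: `toUInt64(-1)` is 2^64 - 1, which is outside the Int64 range, so a safe-cast check rejects it. Illustratively (a hedged model, not the real conversion code):

```python
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def safe_cast_to_int64(value: int) -> int:
    if not (INT64_MIN <= value <= INT64_MAX):
        raise ValueError(f'Value {value} cannot be safely converted into type Int64')
    return value

to_uint64_minus_1 = (-1) % 2**64          # the range column's end value
assert to_uint64_minus_1 == 18446744073709551615

rejected = False
try:
    safe_cast_to_int64(to_uint64_minus_1)
except ValueError:
    rejected = True
assert rejected                            # matches the 21.8 behavior
```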
I'll add a test. | https://github.com/ClickHouse/ClickHouse/issues/37231 | https://github.com/ClickHouse/ClickHouse/pull/37449 | 008de5c779b442af64e8337a45894d01a63e4ac6 | 9de54040ad02445893c51336b5cefcb92da25c3b | "2022-05-16T02:59:05Z" | c++ | "2022-05-23T14:49:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,229 | ["src/Interpreters/InterpreterCreateQuery.cpp", "src/Parsers/ParserCreateQuery.h", "tests/queries/0_stateless/01269_create_with_null.reference", "tests/queries/0_stateless/01269_create_with_null.sql", "tests/queries/0_stateless/02302_column_decl_null_before_defaul_value.reference", "tests/queries/0_stateless/02302_column_decl_null_before_defaul_value.sql"] | Column declaration: [NOT] NULL right after type | Execution of this query
```
create table t (c1 int(11) DEFAULT 1 NULL) engine=MergeTree order by tuple()
```
will return parsed query
```
CREATE TABLE t
(
`c1` int NULL DEFAULT 1
)
ENGINE = MergeTree
ORDER BY tuple()
```
In the parsed query, NULL comes before DEFAULT, in the initial - other way around.
If I try to execute the parsed query, Iβll get the following error:
```
Syntax error: failed at position 36 ('DEFAULT') (line 3, col 19):
CREATE TABLE t
(
`c1` int NULL DEFAULT 1
)
ENGINE = MergeTree
ORDER BY tuple()
Expected one of: COMMENT, CODEC, TTL, token, Comma, ClosingRoundBracket
```
This is confusing.
See also discussion in #37087
Considerations:
(1) considering simplification of migration from other databases (see, for example, #37178) probably it makes sense to keep support for old column declaration, - MySQL supports both ways: `NULL DEFAULT 1` and `DEFAULT 1 NULL`, even `COMMENT '1' NULL` which we don't support. This way it'll keep the change backward compatible
(2) from another side ... I guess ... there is no target to support all variants of different syntax. It's not desirable to complicate parser implementation just for this reason
| https://github.com/ClickHouse/ClickHouse/issues/37229 | https://github.com/ClickHouse/ClickHouse/pull/37337 | 2ff747785ed6eb6da8911975f2673f976db48dd6 | 04e2737a572fbc9e9baf9a5b735a906458fd1e2b | "2022-05-15T21:05:06Z" | c++ | "2022-05-24T19:16:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,205 | ["src/Interpreters/MutationsInterpreter.cpp", "src/Storages/MergeTree/MergeTask.cpp", "src/Storages/MergeTree/MutateTask.cpp", "src/Storages/MergeTree/StorageFromMergeTreeDataPart.h", "tests/queries/0_stateless/01825_type_json_mutations.reference", "tests/queries/0_stateless/01825_type_json_mutations.sql"] | Records with a JSON column aren't deleted correctly | **Describe what's wrong**
A record with a JSON column does not seem to be deleted correctly.
**Does it reproduce on recent release?**
Yes
**How to reproduce**
First let's create the table with a JSON column.
```
CREATE TABLE testJson
(
`id` UInt32,
`fields` JSON
)
ENGINE = ReplicatedMergeTree
ORDER BY id
SETTINGS index_granularity = 8192
```
Let's add a row to the table:
```
INSERT INTO testJson (id, fields) VALUES (1,'{"fieldInteger":1,"fieldBoolean":true,"fieldFloat":1.23,"fieldString":"StringValue"}');
```
Let's check that the row has been properly inserted.
```
SELECT *
FROM testJson
Query id: 62875aa8-67fc-46ee-80b3-382346bbf255
┌─id─┬─fields───────────────────┐
│  1 │ (1,1.23,1,'StringValue') │
└────┴──────────────────────────┘
1 row in set. Elapsed: 0.158 sec.
```
Then I delete the element
```
ALTER TABLE testJson
DELETE WHERE id = 1
Query id: 288c33ed-75c4-42fc-ac04-bbce2cc325e3
┌─host───────────────────────────┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ default|host-1                 │      0 │       │                   1 │                1 │
└────────────────────────────────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host───────────────────────────┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ default|host-2                 │      0 │       │                   0 │                0 │
└────────────────────────────────┴────────┴───────┴─────────────────────┴──────────────────┘
2 rows in set. Elapsed: 0.366 sec.
```
Let's verify that the record has been deleted:
```
SELECT *
FROM testJson
Query id: 62875aa8-67fc-46ee-80b3-382346bbf255
┌─id─┬─fields───────────────────┐
│  1 │ (1,1.23,1,'StringValue') │
└────┴──────────────────────────┘
1 row in set. Elapsed: 0.158 sec.
```
The record is still there.
**Expected behavior**
The row should have been deleted like in the below example:
```
CREATE TABLE testString
(
`id` UInt32,
`fields` String
)
ENGINE = ReplicatedMergeTree
ORDER BY id
SETTINGS index_granularity = 8192
```
Let's insert a row:
```
INSERT INTO testString (id, fields) VALUES (1,'test');
```
Let's check that the record is there:
```
SELECT *
FROM testString
Query id: b2f8f509-3633-43fc-9be7-a797492df0bb
┌─id─┬─fields─┐
│  1 │ test   │
└────┴────────┘
1 rows in set. Elapsed: 0.202 sec.
```
Let's remove it:
```
ALTER TABLE testString
DELETE WHERE id = 1
Query id: 10ad5930-9102-4297-96c3-ae1fdef09d50
┌─host───────────────────────────┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ default|host-1                 │      0 │       │                   1 │                1 │
└────────────────────────────────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host───────────────────────────┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ default|host-2                 │      0 │       │                   0 │                0 │
└────────────────────────────────┴────────┴───────┴─────────────────────┴──────────────────┘
2 rows in set. Elapsed: 0.339 sec.
```
It's being deleted correctly:
```
SELECT *
FROM testString
Query id: fe3c0ea5-cc5a-41c0-be69-04c12d80cd93
Ok.
0 rows in set. Elapsed: 0.156 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/37205 | https://github.com/ClickHouse/ClickHouse/pull/37266 | 492de1076cbf87be99b67ac37ec866faa55c094f | aaace46da25222295d5c3db8119962627d2fc88c | "2022-05-13T18:59:48Z" | c++ | "2022-05-18T10:19:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,141 | ["src/Interpreters/ApplyWithGlobalVisitor.cpp", "tests/queries/0_stateless/02295_global_with_in_subquery.reference", "tests/queries/0_stateless/02295_global_with_in_subquery.sql"] | Exception: Missing columns when union all under with clause in subquery | It's ok when i run
```sql
with
(select count(*) from system.clusters) as v1,
(select count(*) from system.disks) as v2
select v1 as v
union all
select v2 as v
```
result:
```
v |
--+
14|
1|
```
,
but when i wrap it in subquery:
```sql
SELECT v
FROM
(with
(select count(*) from system.clusters) as v1,
(select count(*) from system.disks) as v2
select v1 as v
union all
select v2 as v) as a
```
it raise error:
`SQL Error [47]: ClickHouse exception, code: 47, host: 127.0.0.1, port: 46535; Code: 47, e.displayText() = DB::Exception: Missing columns: 'v2' while processing query: 'SELECT v2 AS v', required columns: 'v2' (version 21.7.3.14 (official build))`
,
in fact , if i drop `union all`, it's still ok:
```sql
SELECT v
FROM
(with
(select count(*) from system.clusters) as v1,
(select count(*) from system.disks) as v2
select v1 as v
) as a
LIMIT 1000;
```
result:
```
v |
--+
14|
```
seems that it cannot recognize `union all` under `with` clause in subquery (version: 21.7.3.14).
Another way to avoid error is `with` first, like this:
```sql
with
(select count(*) from system.clusters) as v1,
(select count(*) from system.disks) as v2
SELECT v
FROM(
select v1 as v
union all
select v2 as v) as a
```
| https://github.com/ClickHouse/ClickHouse/issues/37141 | https://github.com/ClickHouse/ClickHouse/pull/37166 | 7006ef6ebb51ced7c92efc843eb15103e0fd1e78 | 75008d5903cb798112ca0e419abc9c45752f982b | "2022-05-12T03:13:29Z" | c++ | "2022-05-16T10:21:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,099 | ["packages/clickhouse-server.postinstall"] | ClickHouse server fails to start after upgrading from `22.3.6.5-lts` to `22.4.5.9-stable` | Dear ClickHouse maintainers,
Please, kindly find the reported issue below.
# Issue
The ClickHouse server fails to start after upgrading from the `22.3.6.5-lts` version to the `22.4.5.9-stable` version.
# Scenario
## Steps
1. Install the ClickHouse server `22.3.6.5-lts` by using the `*.deb` files (`dpkg -i *.deb`).
2. Create a database. Create a table within the created database. Insert a record into the created table.
3. Upgrade the ClickHouse server to the `22.4.5.9-stable` version by using the `*.deb` files (`dpkg -i *.deb`).
4. Restart the ClickHouse server (`service clickhouse-server restart`).
## Actual result
The ClickHouse server failed to restart:
```
# service clickhouse-server status
β clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
Loaded: loaded (/etc/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Wed 2022-05-11 06:31:21 MSK; 1s ago
Process: 17550 ExecStart=/usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid (code=exited, status=203/EXEC)
Main PID: 17550 (code=exited, status=203/EXEC)
CPU: 10ms
```
## Expected result
The ClickHouse server restarted successfully.
## Additional details
### `dpkg -i` output (log)
[dpkg-i-22.4.5.9-stable-deb.log](https://github.com/ClickHouse/ClickHouse/files/8670143/dpkg-i-22.4.5.9-stable-deb.log).
### After upgrading: Log files: No output
When attempting to restart (or start) the server, no new log lines are appended to either log file:
* `/var/log/clickhouse-server/clickhouse-server.log`.
* `/var/log/clickhouse-server/clickhouse-server.err.log`.
### After upgrading: Diagnostics
```
# ls -la /usr/bin/clickhouse-server
lrwxrwxrwx 1 root root 19 May 11 16:01 /usr/bin/clickhouse-server -> /usr/bin/clickhouse
# ls -la /usr/bin/clickhouse
-r-xr-xr-x 1 root root 482748504 May 6 13:20 /usr/bin/clickhouse
# ls -la /etc/clickhouse-server
total 96
drwx------ 4 clickhouse clickhouse 4096 May 11 16:01 .
drwxr-xr-x 150 root root 12288 May 12 13:22 ..
dr-x------ 2 clickhouse clickhouse 4096 Apr 23 14:48 config.d
-r-------- 1 clickhouse clickhouse 61817 May 6 13:03 config.xml
dr-x------ 2 clickhouse clickhouse 4096 Feb 20 06:27 users.d
-r-------- 1 clickhouse clickhouse 6258 May 6 13:03 users.xml
# cat /lib/systemd/system/clickhouse-server.service | grep CapabilityBoundingSet
CapabilityBoundingSet=CAP_NET_ADMIN CAP_IPC_LOCK CAP_SYS_NICE CAP_NET_BIND_SERVICE
# cat /etc/systemd/system/clickhouse-server.service | grep CapabilityBoundingSet
CapabilityBoundingSet=CAP_NET_ADMIN CAP_IPC_LOCK CAP_SYS_NICE
# getcap /usr/bin/clickhouse
/usr/bin/clickhouse cap_net_bind_service,cap_net_admin,cap_ipc_lock,cap_sys_nice=ep
```
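Note the mismatch above: the unit override in `/etc/systemd/system` lacks `CAP_NET_BIND_SERVICE`, while the packaged unit in `/lib/systemd/system` has it. Whether that mismatch is the root cause of the `203/EXEC` failure is an assumption; a workaround sketch, assuming the override is just a leftover from the old package with no local customizations, would be:

```shell
# show how the stale override differs from the packaged unit
diff /etc/systemd/system/clickhouse-server.service \
     /lib/systemd/system/clickhouse-server.service

# drop the stale override so systemd falls back to the packaged unit
rm /etc/systemd/system/clickhouse-server.service
systemctl daemon-reload
systemctl restart clickhouse-server
```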
Best regards,
Sergey Vyacheslavovich Brunov. | https://github.com/ClickHouse/ClickHouse/issues/37099 | https://github.com/ClickHouse/ClickHouse/pull/39323 | 1842a3fc7a8fdb50491aa97a5b531088ac63fdb9 | a5d5dc2c00047f5b4f2b2e58f1e456c50a7e3522 | "2022-05-11T03:48:13Z" | c++ | "2022-08-03T22:46:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 37,045 | ["src/Interpreters/ExpressionAnalyzer.cpp", "src/Interpreters/ExpressionAnalyzer.h", "tests/queries/0_stateless/02281_limit_by_distributed.reference", "tests/queries/0_stateless/02281_limit_by_distributed.sql"] | Error "Cannot create column of type Set." with distributed in + LIMIT 1 BY | 22.4.5
```sql
SELECT dummy
FROM cluster(test_cluster_two_shards, system.one)
WHERE dummy IN (
SELECT dummy
FROM cluster(test_cluster_two_shards, system.one)
)
LIMIT 1 BY dummy
SETTINGS distributed_product_mode = 'local'
Received exception from server (version 22.4.5):
Code: 43. DB::Exception: Received from localhost:9000.
DB::Exception: Cannot create column of type Set. (ILLEGAL_TYPE_OF_ARGUMENT)
```
22.3.4.20
```sql
SELECT dummy
FROM cluster(test_cluster_two_shards, system.one)
WHERE dummy IN (
SELECT dummy
FROM cluster(test_cluster_two_shards, system.one)
)
LIMIT 1 BY dummy
SETTINGS distributed_product_mode = 'local'
ββdummyββ
β 0 β
βββββββββ
``` | https://github.com/ClickHouse/ClickHouse/issues/37045 | https://github.com/ClickHouse/ClickHouse/pull/37193 | 4def736ecbfdeae0b6b2495dde8cbab317af6918 | 7d88b8162235df5bbef53bdba7074014c375df25 | "2022-05-09T14:34:43Z" | c++ | "2022-05-18T11:28:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,982 | ["programs/server/play.html"] | Play UI: Nullable numbers are not aligned to the right and bar chart is not rendered. | https://play.clickhouse.com/play?user=play#U0VMRUNUIGRhdGFiYXNlLCBuYW1lLCB0b3RhbF9yb3dzLCBmb3JtYXRSZWFkYWJsZVNpemUodG90YWxfYnl0ZXMpIEFTIGIgRlJPTSBzeXN0ZW0udGFibGVzIE9SREVSIEJZIHRvdGFsX3Jvd3MgREVTQw== | https://github.com/ClickHouse/ClickHouse/issues/36982 | https://github.com/ClickHouse/ClickHouse/pull/36988 | 854338d1c8399c3e6f66882549e556e16634dc6e | b2945766d0cfb7efc23ce094b5b4a2703815552c | "2022-05-06T18:14:11Z" | c++ | "2022-05-07T10:06:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,971 | ["src/Daemon/BaseDaemon.cpp"] | Clickhouse Watchdog exits with code 0 if child process killed by SIGKILL | ```
May 06 17:17:08 localhost clickhouse-server[1478172]: 2022.05.06 17:17:08.250884 [ 1478172 ] {} <Fatal> Application: Child process was terminated by signal 9 (KILL). If it is not done by 'forcestop' command or manually, the possible cause is OOM Killer (see 'dmesg' and look at the '/var/log/kern.log' for the details).
May 06 17:17:08 localhost systemd[1]: clickhouse-server.service: Succeeded.
May 06 17:17:08 localhost systemd[1]: clickhouse-server.service: Consumed 2h 15min 18.911s CPU time.
```
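For reference, the behavior the reporter expects can be sketched in shell (illustration only, not ClickHouse's actual watchdog code): when a child dies from a signal, `wait` reports 128 + signum, and the supervisor should propagate that instead of exiting 0.

```shell
#!/usr/bin/env bash
# Sketch of a supervisor that reaps its child and detects a death-by-signal.
sleep 30 &            # stand-in for the clickhouse-server child process
child=$!
kill -KILL "$child"   # simulate the OOM killer delivering SIGKILL
wait "$child"         # bash encodes a signal death as 128 + signum
status=$?
if [ "$status" -gt 128 ]; then
    echo "child terminated by signal $((status - 128)); should exit $status"
else
    echo "child exited with code $status"
fi
# A real supervisor would now run `exit "$status"` (137 here) so that systemd
# records a failure instead of "Succeeded".
```

With that exit status, a unit using `Restart=on-failure` would restart the server after an OOM kill.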
In my case it was EarlyOOM. As you can see, CH terminated normally (exit code 0). That's a bug: it should not exit with code 0 when its child was killed by a signal. | https://github.com/ClickHouse/ClickHouse/issues/36971 | https://github.com/ClickHouse/ClickHouse/pull/47973 | e5f5088320fd9cda1787db84a6182a9716544246 | 3eb2a1cc5b273319327d96829c8f8e225aa531fc | "2022-05-06T15:42:40Z" | c++ | "2023-03-24T21:26:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,966 | ["src/Storages/StorageReplicatedMergeTree.cpp"] | Mutation 0000000000 was killed | As I've observed the DDL worker being a bottleneck when working with replicas, I decided to give the `pool_size` option a try, since the queued operations are related to different tables (and usually even different databases too), to see how things go.
Apparently things work fine (since the caller avoids sending related changes), but today I saw this exception that could be related:
CH version: `version 22.3.2.1`
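For reference, `pool_size` here refers to the DDL worker pool configured under `distributed_ddl` in the server config; a minimal sketch (the path and the value 4 are examples, not the reporter's actual settings):

```xml
<clickhouse>
    <distributed_ddl>
        <!-- ZooKeeper path of the ON CLUSTER task queue -->
        <path>/clickhouse/task_queue/ddl</path>
        <!-- number of DDL worker threads processing queued entries in parallel -->
        <pool_size>4</pool_size>
    </distributed_ddl>
</clickhouse>
```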
Query (not really important since those are tests):
```
ALTER TABLE d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 DELETE WHERE 1 = 1
```
Exception:
```
2022.05.06 09:01:01.303172 [ 9825 ] {e1da4a60-2788-43ea-94f1-642b6955dbc1} <Error> executeQuery: Code: 341. DB::Exception: Mutation 0000000000 was killed. (UNFINISHED) (version 22.3.2.1) (from 127.0.0.1:46444) (in query: ALTER TABLE d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 DELETE WHERE 1 = 1 ), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa4dde1a in /usr/bin/clickhouse
1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xa54a504 in /usr/bin/clickhouse
2. DB::checkMutationStatus(std::__1::optional<DB::MergeTreeMutationStatus>&, std::__1::set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x14f8f610 in /usr/bin/clickhouse
3. DB::StorageReplicatedMergeTree::waitMutationToFinishOnReplicas(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0x14b851ad in /usr/bin/clickhouse
4. DB::StorageReplicatedMergeTree::waitMutation(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long) const @ 0x14bf86ca in /usr/bin/clickhouse
5. DB::StorageReplicatedMergeTree::mutate(DB::MutationCommands const&, std::__1::shared_ptr<DB::Context const>) @ 0x14c1a986 in /usr/bin/clickhouse
6. DB::InterpreterAlterQuery::executeToTable(DB::ASTAlterQuery const&) @ 0x144034bc in /usr/bin/clickhouse
7. DB::InterpreterAlterQuery::execute() @ 0x14401b92 in /usr/bin/clickhouse
8. ? @ 0x148d0c9a in /usr/bin/clickhouse
9. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x148d3fca in /usr/bin/clickhouse
10. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x151cdae7 in /usr/bin/clickhouse
11. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x151d2292 in /usr/bin/clickhouse
12. DB::HTTPServerConnection::run() @ 0x1545979b in /usr/bin/clickhouse
13. Poco::Net::TCPServerConnection::start() @ 0x164b264f in /usr/bin/clickhouse
14. Poco::Net::TCPServerDispatcher::run() @ 0x164b4aa1 in /usr/bin/clickhouse
15. Poco::PooledThread::run() @ 0x16671e49 in /usr/bin/clickhouse
16. Poco::ThreadImpl::runnableEntry(void*) @ 0x1666f1a0 in /usr/bin/clickhouse
17. ? @ 0x7efcecf5f609 in ?
18. __clone @ 0x7efcece84163 in ?
2022.05.06 09:01:01.303294 [ 9825 ] {e1da4a60-2788-43ea-94f1-642b6955dbc1} <Error> DynamicQueryHandler: Code: 341. DB::Exception: Mutation 0000000000 was killed. (UNFINISHED), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa4dde1a in /usr/bin/clickhouse
1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xa54a504 in /usr/bin/clickhouse
2. DB::checkMutationStatus(std::__1::optional<tor<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x14f8f610 in /usr/bin/clickhouse
3. DB::StorageReplicatedMergeTree::waitMutationTDB::MergeTreeMutationStatus>&, std::__1::set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocaoFinishOnReplicas(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0x14b851ad in /usr/bin/clickhouse
4. DB::StorageReplicatedMergeTree::waitMutation(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long) const @ 0x14bf86ca in /usr/bin/clickhouse
5. DB::StorageReplicatedMergeTree::mutate(DB::MutationCommands const&, std::__1::shared_ptr<DB::Context const>) @ 0x14c1a986 in /usr/bin/clickhouse
6. DB::InterpreterAlterQuery::executeToTable(DB::ASTAlterQuery const&) @ 0x144034bc in /usr/bin/clickhouse
7. DB::InterpreterAlterQuery::execute() @ 0x14401b92 in /usr/bin/clickhouse
8. ? @ 0x148d0c9a in /usr/bin/clickhouse
9. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x148d3fca in /usr/bin/clickhouse
10. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x151cdae7 in /usr/bin/clickhouse
11. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x151d2292 in /usr/bin/clickhouse
12. DB::HTTPServerConnection::run() @ 0x1545979b in /usr/bin/clickhouse
13. Poco::Net::TCPServerConnection::start() @ 0x164b264f in /usr/bin/clickhouse
14. Poco::Net::TCPServerDispatcher::run() @ 0x164b4aa1 in /usr/bin/clickhouse
15. Poco::PooledThread::run() @ 0x16671e49 in /usr/bin/clickhouse
16. Poco::ThreadImpl::runnableEntry(void*) @ 0x1666f1a0 in /usr/bin/clickhouse
17. ? @ 0x7efcecf5f609 in ?
18. __clone @ 0x7efcece84163 in ?
(version 22.3.2.1)
```
Log in the replica that received the query:
```
2022.05.06 09:01:01.283220 [ 9825 ] {} <Trace> DynamicQueryHandler: Request URI: /?database=d_test_815b7009495344f7ab177b17c6250b5c&query_id=e1da4a60-2788-43ea-94f1-642b6955dbc1&max_result_bytes=104857600&log_queries=1&optimize_throw_if_noop=1&output_format_json_quote_64bit_integers=0&lock_acquire_timeout=120&wait_end_of_query=1&buffer_size=104857600&max_execution_time=3600&insert_deduplicate=1&mutations_sync=2
2022.05.06 09:01:01.283229 [ 9825 ] {} <Debug> HTTP-Session: 7bee1174-2cac-4d72-bb15-a8262f26cef0 Authenticating user 'default' from 127.0.0.1:46444
2022.05.06 09:01:01.283240 [ 9825 ] {} <Debug> HTTP-Session: 7bee1174-2cac-4d72-bb15-a8262f26cef0 Authenticated with global context as user 94309d50-4f52-5250-31bd-74fecac179db
2022.05.06 09:01:01.283247 [ 9825 ] {} <Debug> HTTP-Session: 7bee1174-2cac-4d72-bb15-a8262f26cef0 Creating query context from global context, user_id: 94309d50-4f52-5250-31bd-74fecac179db, parent context user: <NOT SET>
2022.05.06 09:01:01.283441 [ 9825 ] {e1da4a60-2788-43ea-94f1-642b6955dbc1} <Debug> executeQuery: (from 127.0.0.1:46444) ALTER TABLE d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 DELETE WHERE 1 = 1
2022.05.06 09:01:01.283484 [ 9825 ] {e1da4a60-2788-43ea-94f1-642b6955dbc1} <Trace> ContextAccess (default): Access granted: ALTER DELETE ON d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0
2022.05.06 09:01:01.283773 [ 296 ] {} <Trace> DDLWorker: scheduleTasks: initialized=true, size_before_filtering=643, queue_size=643, entries=query-0000000000..query-0000000642, first_failed_task_name=none, current_tasks_size=1, last_current_task=query-0000000641, last_skipped_entry_name=none
2022.05.06 09:01:01.283783 [ 296 ] {} <Debug> DDLWorker: Will schedule 1 tasks starting from query-0000000642
2022.05.06 09:01:01.283788 [ 296 ] {} <Trace> DDLWorker: Checking task query-0000000642
2022.05.06 09:01:01.283856 [ 9825 ] {e1da4a60-2788-43ea-94f1-642b6955dbc1} <Trace> InterpreterSelectQuery: Running 'analyze' second time
2022.05.06 09:01:01.284091 [ 163 ] {} <Debug> test_public_3e7de17c83d442809f79016c72ebc19f.t_ea74588f21024ac3b024641ebe98a980 (ReplicatedMergeTreeQueue): Pulling 1 entries to queue: log-0000002648 - log-0000002648
2022.05.06 09:01:01.284880 [ 296 ] {} <Debug> DDLWorker: Waiting for queue updates
2022.05.06 09:01:01.284886 [ 1513 ] {} <Debug> DDLWorker: Processing task query-0000000642 (DROP DATABASE IF EXISTS d_test_506dac9b765a492a9da234ef1721ee7e ON CLUSTER tinybird)
2022.05.06 09:01:01.286009 [ 163 ] {} <Debug> test_public_3e7de17c83d442809f79016c72ebc19f.t_ea74588f21024ac3b024641ebe98a980 (ReplicatedMergeTreeQueue): Pulled 1 entries to queue.
2022.05.06 09:01:01.286065 [ 1513 ] {} <Debug> DDLWorker: Executing query: DROP DATABASE IF EXISTS d_test_506dac9b765a492a9da234ef1721ee7e
2022.05.06 09:01:01.286111 [ 131 ] {} <Trace> MergeFromLogEntryTask: Executing log entry to merge parts 202205_2026_2026_0, 202205_2027_2027_0, 202205_2028_2028_0, 202205_2029_2029_0, 202205_2030_2030_0, 202205_2031_2031_0 to 202205_2026_2031_1
2022.05.06 09:01:01.286145 [ 1513 ] {180a0f02-1ed9-41bf-bdc6-5fa9fdb16800} <Debug> executeQuery: (from 0.0.0.0:0, user: ) /* ddl_entry=query-0000000642 */ DROP DATABASE IF EXISTS d_test_506dac9b765a492a9da234ef1721ee7e
2022.05.06 09:01:01.286159 [ 131 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 80.96 GiB.
2022.05.06 09:01:01.286207 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Debug> MergeTask::PrepareStage: Merging 6 parts: from 202205_2026_2026_0 to 202205_2031_2031_0 into Compact
2022.05.06 09:01:01.286304 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Debug> MergeTask::PrepareStage: Selected MergeAlgorithm: Horizontal
2022.05.06 09:01:01.286354 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202205_2026_2026_0, total 1 rows starting from the beginning of the part
2022.05.06 09:01:01.286412 [ 1513 ] {180a0f02-1ed9-41bf-bdc6-5fa9fdb16800} <Debug> DDLWorker: Executed query: DROP DATABASE IF EXISTS d_test_506dac9b765a492a9da234ef1721ee7e
2022.05.06 09:01:01.286423 [ 1513 ] {180a0f02-1ed9-41bf-bdc6-5fa9fdb16800} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2022.05.06 09:01:01.286538 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202205_2027_2027_0, total 1 rows starting from the beginning of the part
2022.05.06 09:01:01.286699 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202205_2028_2028_0, total 1 rows starting from the beginning of the part
2022.05.06 09:01:01.286862 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202205_2029_2029_0, total 1 rows starting from the beginning of the part
2022.05.06 09:01:01.286991 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202205_2030_2030_0, total 1 rows starting from the beginning of the part
2022.05.06 09:01:01.287125 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202205_2031_2031_0, total 1 rows starting from the beginning of the part
2022.05.06 09:01:01.288639 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Debug> MergeTask::MergeProjectionsStage: Merge sorted 6 rows, containing 18 columns (18 merged, 0 gathered) in 0.002465423 sec., 2433.6594572209315 rows/sec., 1.25 MiB/sec.
2022.05.06 09:01:01.288838 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Trace> MergedBlockOutputStream: filled checksums 202205_2026_2031_1 (state Temporary)
2022.05.06 09:01:01.289180 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Trace> test_public_3e7de17c83d442809f79016c72ebc19f.t_ea74588f21024ac3b024641ebe98a980 (ee0155d3-b36c-48e6-ae71-90a3dc39f414): Renaming temporary part tmp_merge_202205_2026_2031_1 to 202205_2026_2031_1.
2022.05.06 09:01:01.289289 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Trace> test_public_3e7de17c83d442809f79016c72ebc19f.t_ea74588f21024ac3b024641ebe98a980 (ee0155d3-b36c-48e6-ae71-90a3dc39f414) (MergerMutator): Merged 6 parts: from 202205_2026_2026_0 to 202205_2031_2031_0
2022.05.06 09:01:01.296565 [ 9868 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: POST, Address: 127.0.0.1:46448, User-Agent: tb-internal-query, Length: 51, Content Type: application/x-www-form-urlencoded, Transfer Encoding: identity, X-Forwarded-For: (none)
2022.05.06 09:01:01.296662 [ 9868 ] {} <Trace> DynamicQueryHandler: Request URI: /?database=default&query_id=28907f64-b1e0-42c9-beef-bab7b1fbe45f&max_result_bytes=104857600&log_queries=1&optimize_throw_if_noop=1&output_format_json_quote_64bit_integers=0&lock_acquire_timeout=10&wait_end_of_query=1&buffer_size=104857600&max_execution_time=10
2022.05.06 09:01:01.296743 [ 195 ] {} <Information> d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 (ReplicatedMergeTreeQueue): Loading 1 mutation entries: 0000000000 - 0000000000
2022.05.06 09:01:01.296775 [ 9868 ] {} <Debug> HTTP-Session: 0d0eb2ac-64d0-42b7-9041-1321c689a575 Authenticating user 'default' from 127.0.0.1:46448
2022.05.06 09:01:01.296801 [ 9825 ] {e1da4a60-2788-43ea-94f1-642b6955dbc1} <Trace> d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 (9931c694-ee0f-4f91-9547-e2d6d13f4511): Created mutation with ID 0000000000
2022.05.06 09:01:01.296825 [ 9868 ] {} <Debug> HTTP-Session: 0d0eb2ac-64d0-42b7-9041-1321c689a575 Authenticated with global context as user 94309d50-4f52-5250-31bd-74fecac179db
2022.05.06 09:01:01.296841 [ 9868 ] {} <Debug> HTTP-Session: 0d0eb2ac-64d0-42b7-9041-1321c689a575 Creating query context from global context, user_id: 94309d50-4f52-5250-31bd-74fecac179db, parent context user: <NOT SET>
2022.05.06 09:01:01.297059 [ 9868 ] {28907f64-b1e0-42c9-beef-bab7b1fbe45f} <Debug> executeQuery: (from 127.0.0.1:46448) create database IF NOT EXISTS d_f02c61 FORMAT JSON
2022.05.06 09:01:01.297093 [ 9868 ] {28907f64-b1e0-42c9-beef-bab7b1fbe45f} <Trace> ContextAccess (default): Access granted: CREATE DATABASE ON d_f02c61.*
2022.05.06 09:01:01.297354 [ 131 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Information> test_public_3e7de17c83d442809f79016c72ebc19f.t_ea74588f21024ac3b024641ebe98a980 (ee0155d3-b36c-48e6-ae71-90a3dc39f414): The part /clickhouse/tables/01-01/test_public_3e7de17c83d442809f79016c72ebc19f.t_ea74588f21024ac3b024641ebe98a980/replicas/clickhouse-02/parts/202205_2026_2031_1 on a replica suddenly appeared, will recheck checksums
2022.05.06 09:01:01.299880 [ 9868 ] {28907f64-b1e0-42c9-beef-bab7b1fbe45f} <Information> DatabaseAtomic (d_f02c61): Metadata processed, database d_f02c61 has 0 tables and 0 dictionaries in total.
2022.05.06 09:01:01.299892 [ 9868 ] {28907f64-b1e0-42c9-beef-bab7b1fbe45f} <Information> TablesLoader: Parsed metadata of 0 tables in 1 databases in 6.4513e-05 sec
2022.05.06 09:01:01.299898 [ 9868 ] {28907f64-b1e0-42c9-beef-bab7b1fbe45f} <Information> TablesLoader: Loading 0 tables with 0 dependency level
2022.05.06 09:01:01.299904 [ 9868 ] {28907f64-b1e0-42c9-beef-bab7b1fbe45f} <Information> DatabaseAtomic (d_f02c61): Starting up tables.
2022.05.06 09:01:01.299915 [ 9825 ] {e1da4a60-2788-43ea-94f1-642b6955dbc1} <Debug> d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 (9931c694-ee0f-4f91-9547-e2d6d13f4511): Waiting for clickhouse-02 to apply mutation 0000000000
2022.05.06 09:01:01.300081 [ 9868 ] {28907f64-b1e0-42c9-beef-bab7b1fbe45f} <Debug> DynamicQueryHandler: Done processing query
2022.05.06 09:01:01.300098 [ 9868 ] {28907f64-b1e0-42c9-beef-bab7b1fbe45f} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2022.05.06 09:01:01.300107 [ 9868 ] {} <Debug> HTTP-Session: 0d0eb2ac-64d0-42b7-9041-1321c689a575 Destroying unnamed session of user 94309d50-4f52-5250-31bd-74fecac179db
2022.05.06 09:01:01.301523 [ 131 ] {} <Debug> MemoryTracker: Peak memory usage Mutate/Merge: 4.06 MiB.
2022.05.06 09:01:01.303172 [ 9825 ] {e1da4a60-2788-43ea-94f1-642b6955dbc1} <Error> executeQuery: Code: 341. DB::Exception: Mutation 0000000000 was killed. (UNFINISHED) (version 22.3.2.1) (from 127.0.0.1:46444) (in query: ALTER TABLE d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 DELETE WHERE 1 = 1 ), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa4dde1a in /usr/bin/clickhouse
1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xa54a504 in /usr/bin/clickhouse
2. DB::checkMutationStatus(std::__1::optional<DB::MergeTreeMutationStatus>&, std::__1::set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x14f8f610 in /usr/bin/clickhouse
3. DB::StorageReplicatedMergeTree::waitMutationToFinishOnReplicas(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0x14b851ad in /usr/bin/clickhouse
4. DB::StorageReplicatedMergeTree::waitMutation(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long) const @ 0x14bf86ca in /usr/bin/clickhouse
5. DB::StorageReplicatedMergeTree::mutate(DB::MutationCommands const&, std::__1::shared_ptr<DB::Context const>) @ 0x14c1a986 in /usr/bin/clickhouse
6. DB::InterpreterAlterQuery::executeToTable(DB::ASTAlterQuery const&) @ 0x144034bc in /usr/bin/clickhouse
7. DB::InterpreterAlterQuery::execute() @ 0x14401b92 in /usr/bin/clickhouse
8. ? @ 0x148d0c9a in /usr/bin/clickhouse
9. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x148d3fca in /usr/bin/clickhouse
10. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x151cdae7 in /usr/bin/clickhouse
11. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x151d2292 in /usr/bin/clickhouse
12. DB::HTTPServerConnection::run() @ 0x1545979b in /usr/bin/clickhouse
13. Poco::Net::TCPServerConnection::start() @ 0x164b264f in /usr/bin/clickhouse
14. Poco::Net::TCPServerDispatcher::run() @ 0x164b4aa1 in /usr/bin/clickhouse
15. Poco::PooledThread::run() @ 0x16671e49 in /usr/bin/clickhouse
16. Poco::ThreadImpl::runnableEntry(void*) @ 0x1666f1a0 in /usr/bin/clickhouse
17. ? @ 0x7efcecf5f609 in ?
18. __clone @ 0x7efcece84163 in ?
2022.05.06 09:01:01.303294 [ 9825 ] {e1da4a60-2788-43ea-94f1-642b6955dbc1} <Error> DynamicQueryHandler: Code: 341. DB::Exception: Mutation 0000000000 was killed. (UNFINISHED), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa4dde1a in /usr/bin/clickhouse
1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xa54a504 in /usr/bin/clickhouse
2. DB::checkMutationStatus(std::__1::optional<tor<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x14f8f610 in /usr/bin/clickhouse
3. DB::StorageReplicatedMergeTree::waitMutationTDB::MergeTreeMutationStatus>&, std::__1::set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocaoFinishOnReplicas(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0x14b851ad in /usr/bin/clickhouse
4. DB::StorageReplicatedMergeTree::waitMutation(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long) const @ 0x14bf86ca in /usr/bin/clickhouse
5. DB::StorageReplicatedMergeTree::mutate(DB::MutationCommands const&, std::__1::shared_ptr<DB::Context const>) @ 0x14c1a986 in /usr/bin/clickhouse
6. DB::InterpreterAlterQuery::executeToTable(DB::ASTAlterQuery const&) @ 0x144034bc in /usr/bin/clickhouse
7. DB::InterpreterAlterQuery::execute() @ 0x14401b92 in /usr/bin/clickhouse
8. ? @ 0x148d0c9a in /usr/bin/clickhouse
9. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x148d3fca in /usr/bin/clickhouse
10. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x151cdae7 in /usr/bin/clickhouse
11. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x151d2292 in /usr/bin/clickhouse
12. DB::HTTPServerConnection::run() @ 0x1545979b in /usr/bin/clickhouse
13. Poco::Net::TCPServerConnection::start() @ 0x164b264f in /usr/bin/clickhouse
14. Poco::Net::TCPServerDispatcher::run() @ 0x164b4aa1 in /usr/bin/clickhouse
15. Poco::PooledThread::run() @ 0x16671e49 in /usr/bin/clickhouse
16. Poco::ThreadImpl::runnableEntry(void*) @ 0x1666f1a0 in /usr/bin/clickhouse
17. ? @ 0x7efcecf5f609 in ?
18. __clone @ 0x7efcece84163 in ?
(version 22.3.2.1)
2022.05.06 09:01:01.303377 [ 9825 ] {e1da4a60-2788-43ea-94f1-642b6955dbc1} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2022.05.06 09:01:01.303384 [ 9825 ] {} <Debug> HTTP-Session: 7bee1174-2cac-4d72-bb15-a8262f26cef0 Destroying unnamed session of user 94309d50-4f52-5250-31bd-74fecac179db
2022.05.06 09:01:01.303809 [ 195 ] {} <Trace> d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 (ReplicatedMergeTreeQueue): Adding mutation 0000000000 for partition ffc2b90782a6ac80cd2606b219db12bb for all block numbers less than 1
2022.05.06 09:01:01.303860 [ 234 ] {} <Trace> d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 (ReplicatedMergeTreeQueue): Will check if mutation 0000000000 is done
2022.05.06 09:01:01.303870 [ 234 ] {} <Debug> d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 (ReplicatedMergeTreeQueue): Trying to finalize 1 mutations
2022.05.06 09:01:01.305385 [ 234 ] {} <Trace> d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 (ReplicatedMergeTreeQueue): Mutation 0000000000 is done
2022.05.06 09:01:01.308236 [ 9870 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: POST, Address: 127.0.0.1:46452, User-Agent: tb-insert-chunk, Length: 450, Content Type: , Transfer Encoding: identity, X-Forwarded-For: (none)
```
Log in the second replica:
```
2022.05.06 09:01:01.288083 [ 344 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Trace> MergedBlockOutputStream: filled checksums 202205_2026_2031_1 (state Temporary)
2022.05.06 09:01:01.288437 [ 344 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Trace> test_public_3e7de17c83d442809f79016c72ebc19f.t_ea74588f21024ac3b024641ebe98a980 (ee0155d3-b36c-48e6-ae71-90a3dc39f414): Renaming temporary part tmp_merge_202205_2026_2031_1 to 202205_2026_2031_1.
2022.05.06 09:01:01.288496 [ 344 ] {ee0155d3-b36c-48e6-ae71-90a3dc39f414::202205_2026_2031_1} <Trace> test_public_3e7de17c83d442809f79016c72ebc19f.t_ea74588f21024ac3b024641ebe98a980 (ee0155d3-b36c-48e6-ae71-90a3dc39f414) (MergerMutator): Merged 6 parts: from 202205_2026_2026_0 to 202205_2031_2031_0
2022.05.06 09:01:01.288662 [ 365 ] {} <Debug> test_public_3e7de17c83d442809f79016c72ebc19f.t_0d8fea409ba04b458b1399fdf65ce74e (e9a32087-6071-47eb-9ce7-e99d325535c0): Fetched part 2022_132_132_0 from /clickhouse/tables/01-01/test_public_3e7de17c83d442809f79016c72ebc19f.t_0d8fea409ba04b458b1399fdf65ce74e/replicas/clickhouse-01
2022.05.06 09:01:01.296685 [ 390 ] {} <Information> d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 (ReplicatedMergeTreeQueue): Loading 1 mutation entries: 0000000000 - 0000000000
2022.05.06 09:01:01.297382 [ 390 ] {} <Trace> d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 (ReplicatedMergeTreeQueue): Adding mutation 0000000000 for partition ffc2b90782a6ac80cd2606b219db12bb for all block numbers less than 1
2022.05.06 09:01:01.297442 [ 430 ] {} <Trace> d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 (ReplicatedMergeTreeQueue): Will check if mutation 0000000000 is done
2022.05.06 09:01:01.297451 [ 430 ] {} <Debug> d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 (ReplicatedMergeTreeQueue): Trying to finalize 1 mutations
2022.05.06 09:01:01.299869 [ 344 ] {} <Debug> MemoryTracker: Peak memory usage Mutate/Merge: 4.06 MiB.
2022.05.06 09:01:01.302271 [ 430 ] {} <Trace> d_test_815b7009495344f7ab177b17c6250b5c.t_afc78aa78e174881a648c09695b6f436_replaced_t_02ca728dfed941c28d362a7fd74385b0 (ReplicatedMergeTreeQueue): Mutation 0000000000 is done
2022.05.06 09:01:01.317085 [ 508 ] {} <Debug> DDLWorker: Scheduling tasks
2022.05.06 09:01:01.317089 [ 509 ] {} <Trace> DDLWorker: Too early to clean queue, will do it later.
2022.05.06 09:01:01.323894 [ 508 ] {} <Trace> DDLWorker: scheduleTasks: initialized=true, size_before_filtering=644, queue_size=644, entries=query-0000000000..query-0000000643, first_failed_task_name=none, current_tasks_size=1, last_current_task=query-0000000642, last_skipped_entry_name=none
```
I don't know if it's related, but I find it odd that `StorageReplicatedMergeTree::waitMutation` is supposed to put the current replica first in the list of replicas to check, yet the logs show it checking the other replica first (`clickhouse-02` before `clickhouse-01`).
```
$ egrep "Waiting for clickhouse.*to apply mutation" clickhouse-server-*.log | tail
clickhouse-server-1.log:2022.05.06 09:04:03.304003 [ 20287 ] {df7e366f-f5c9-4fd8-b16b-1d9022bfc946} <Debug> d_test_5c90353d4a684cf39731ffdbc0acc690.t_a40167a708f44058910a9aba12181e51_replaced_t_3848d2f366704d7785856f145782f2d2 (c9fad2c2-3df8-4d5e-abe1-180b57d4a25c): Waiting for clickhouse-02 to apply mutation 0000000000
clickhouse-server-1.log:2022.05.06 09:04:03.315757 [ 20287 ] {df7e366f-f5c9-4fd8-b16b-1d9022bfc946} <Debug> d_test_5c90353d4a684cf39731ffdbc0acc690.t_a40167a708f44058910a9aba12181e51_replaced_t_3848d2f366704d7785856f145782f2d2 (c9fad2c2-3df8-4d5e-abe1-180b57d4a25c): Waiting for clickhouse-01 to apply mutation 0000000000
clickhouse-server-1.log:2022.05.06 09:04:07.111596 [ 19391 ] {a2411e67-2e86-423e-8402-236d228df988} <Debug> d_test_cb6415836ba34f3c96d29ebdf9159b70.t_37c6a4091c5f46b59268c75d624029c3_replaced_t_52f26a10a6c748ae83a0c377a1d98a73 (76b91791-f1cc-4a38-8c24-7bbbdce2f738): Waiting for clickhouse-02 to apply mutation 0000000000
clickhouse-server-1.log:2022.05.06 09:04:07.140792 [ 19391 ] {a2411e67-2e86-423e-8402-236d228df988} <Debug> d_test_cb6415836ba34f3c96d29ebdf9159b70.t_37c6a4091c5f46b59268c75d624029c3_replaced_t_52f26a10a6c748ae83a0c377a1d98a73 (76b91791-f1cc-4a38-8c24-7bbbdce2f738): Waiting for clickhouse-01 to apply mutation 0000000000
clickhouse-server-1.log:2022.05.06 09:04:10.241893 [ 19391 ] {72a5fa63-1138-413c-af87-49ad12eb8f43} <Debug> d_test_cb6415836ba34f3c96d29ebdf9159b70.t_cc363d4dd26a40deaf54c9bb04db36b5_replaced_t_b2816294fccd42169093ece32c157569 (deb5a762-f20a-411f-a3d4-c050b0dfb541): Waiting for clickhouse-02 to apply mutation 0000000000
clickhouse-server-1.log:2022.05.06 09:04:10.257769 [ 19391 ] {72a5fa63-1138-413c-af87-49ad12eb8f43} <Debug> d_test_cb6415836ba34f3c96d29ebdf9159b70.t_cc363d4dd26a40deaf54c9bb04db36b5_replaced_t_b2816294fccd42169093ece32c157569 (deb5a762-f20a-411f-a3d4-c050b0dfb541): Waiting for clickhouse-01 to apply mutation 0000000000
clickhouse-server-1.log:2022.05.06 09:04:13.460351 [ 19391 ] {de6ca97b-74db-493a-8ce3-ae202cb740a0} <Debug> d_test_cb6415836ba34f3c96d29ebdf9159b70.t_ea82ddaa34fe48fb9a8b73667a99c83b_replaced_t_95ee3204e3d14330aa8909fe62e6ceb0 (bf906c86-e1b4-40f5-a536-b6a9f37efa61): Waiting for clickhouse-02 to apply mutation 0000000000
clickhouse-server-1.log:2022.05.06 09:04:13.472835 [ 19391 ] {de6ca97b-74db-493a-8ce3-ae202cb740a0} <Debug> d_test_cb6415836ba34f3c96d29ebdf9159b70.t_ea82ddaa34fe48fb9a8b73667a99c83b_replaced_t_95ee3204e3d14330aa8909fe62e6ceb0 (bf906c86-e1b4-40f5-a536-b6a9f37efa61): Waiting for clickhouse-01 to apply mutation 0000000000
clickhouse-server-1.log:2022.05.06 09:04:16.617920 [ 19391 ] {593ba3d2-3c5f-4b53-8274-d35c8cc28cdf} <Debug> d_test_cb6415836ba34f3c96d29ebdf9159b70.t_d07f9b4f7b784e2a8abea4effe90dbdf_replaced_t_3b83ca40996b443b90a6ebe7b2947035 (4c4aa3b7-67cd-43a7-976a-6e18554f869f): Waiting for clickhouse-02 to apply mutation 0000000000
clickhouse-server-1.log:2022.05.06 09:04:16.629736 [ 19391 ] {593ba3d2-3c5f-4b53-8274-d35c8cc28cdf} <Debug> d_test_cb6415836ba34f3c96d29ebdf9159b70.t_d07f9b4f7b784e2a8abea4effe90dbdf_replaced_t_3b83ca40996b443b90a6ebe7b2947035 (4c4aa3b7-67cd-43a7-976a-6e18554f869f): Waiting for clickhouse-01 to apply mutation 0000000000
```
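For a quicker read of that ordering, the log lines can be reduced to just timestamp and replica with plain shell. The two sample lines below are copied verbatim from the excerpt above; the temp path is arbitrary:

```shell
# Reduce "Waiting for <replica> to apply mutation" lines to "<time> <replica>"
# pairs; field 2 is the time, field 5 is the replica name.
cat <<'EOF' > /tmp/wait_mutation_sample.log
2022.05.06 09:04:03.304003 Waiting for clickhouse-02 to apply mutation 0000000000
2022.05.06 09:04:03.315757 Waiting for clickhouse-01 to apply mutation 0000000000
EOF
awk '{print $2, $5}' /tmp/wait_mutation_sample.log
```

In every pair, `clickhouse-02` appears before `clickhouse-01`, matching the observation above.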
I've only seen this once (over hundreds of runs), so it might not be related to `distributed_ddl.pool_size` and might instead be a different bug.
Any ideas on how to reproduce / debug this further are really appreciated. | https://github.com/ClickHouse/ClickHouse/issues/36966 | https://github.com/ClickHouse/ClickHouse/pull/39900 | f08952b74c7bcfa1dbbde2ccd292da85e145f7ea | f52b6748db0c5076ae3a53390d75dbc71aa88b6a | "2022-05-06T11:59:24Z" | c++ | "2022-08-06T10:09:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,894 | ["tests/queries/0_stateless/02302_lc_nullable_string_insert_as_number.reference", "tests/queries/0_stateless/02302_lc_nullable_string_insert_as_number.sql"] | `ColumnLowCardinality cannot be inside Nullable column` error if insert values `int` instead of `string` | ```sql
21.8.13.6
CREATE TABLE a(`c1` LowCardinality(Nullable(String)) DEFAULT CAST(NULL, 'LowCardinality(Nullable(String))'))
ENGINE = Memory;
INSERT INTO a (c1) FORMAT Values (0);
select * from a;
┌─c1─┐
│ 0  │
└────┘
```
```sql
22.3.4
CREATE TABLE a(`c1` LowCardinality(Nullable(String)) DEFAULT CAST(NULL, 'LowCardinality(Nullable(String))'))
ENGINE = Memory;
INSERT INTO a (c1) FORMAT Values (0);
Error on processing query: Code: 44. DB::Exception: ColumnLowCardinality cannot be inside Nullable column: while ex...
INSERT INTO a (c1) FORMAT Values ('0');
ok
```
```sql
22.4.4.7
CREATE TABLE a(`c1` LowCardinality(Nullable(String)) DEFAULT CAST(NULL, 'LowCardinality(Nullable(String))'))
ENGINE = Memory;
INSERT INTO a (c1) FORMAT Values (0);
select * from a;
┌─c1─┐
│ 0  │
└────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/36894 | https://github.com/ClickHouse/ClickHouse/pull/51274 | 3a170c297a61e2a7957dda104a3eed0f77f129a1 | 41783c47d1fead9e8094dc460038e6571de1e83a | "2022-05-04T00:00:38Z" | c++ | "2023-07-04T22:59:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,810 | ["PreLoad.cmake"] | Unable to build ClickHouse from source on macOS Monterey | > Make sure that `git diff` result is empty and you've just pulled fresh master. Try cleaning up cmake cache. Just in case, official build instructions are published here: https://clickhouse.com/docs/en/development/build/
- [x] git diff is empty
- [x] Pulled fresh master
- [x] Cleaned `cmake` cache
- [x] Re-reviewed build instructions for macOS multiple times
- [x] git submodules re-init and up to date
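For reference, the first and last checks above can be scripted with standard git commands. This is a sketch demonstrated on a throwaway repository (it makes no assumptions about the local ClickHouse checkout); in practice the same commands run inside the checkout itself:

```shell
# Demonstrate the "git diff is empty" / "cleaned cmake cache" checks on a
# throwaway repo; in a real run, replace $demo with the ClickHouse checkout.
demo=$(mktemp -d)
git -C "$demo" init -q
git -C "$demo" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "init"
git -C "$demo" diff --quiet && echo "git diff is empty"
rm -rf "$demo/build" && mkdir "$demo/build"   # wipe any stale CMake cache dir
echo "build dir reset"
rm -rf "$demo"
```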
**Operating system**
> OS kind or distribution, specific version/release, non-standard kernel if any. If you are trying to build inside virtual machine, please mention it too.
macOS Monterey, version 12.3.1
**Cmake version**
3.23.1
**Ninja version**
1.10.2
**Compiler name and version**
```
#>> $(brew --prefix llvm)/bin/clang --version
Homebrew clang version 13.0.1
Target: x86_64-apple-darwin21.4.0
Thread model: posix
InstalledDir: /usr/local/opt/llvm/bin
```
```
#>> $(brew --prefix llvm)/bin/clang++ --version
Homebrew clang version 13.0.1
Target: x86_64-apple-darwin21.4.0
Thread model: posix
InstalledDir: /usr/local/opt/llvm/bin
```
**Full cmake and/or ninja output**
```
#>> cmake -DCMAKE_C_COMPILER=$(brew --prefix llvm)/bin/clang -DCMAKE_CXX_COMPILER=$(brew --prefix llvm)/bin/clang++ -DCMAKE_AR=$(brew --prefix llvm)/bin/llvm-ar -DCMAKE_RANLIB=$(brew --prefix llvm)/bin/llvm-ranlib -DOBJCOPY_PATH=$(brew --prefix llvm)/bin/llvm-objcopy -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
-- The C compiler identification is Clang 13.0.1
-- The CXX compiler identification is Clang 13.0.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/local/opt/llvm/bin/clang - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/local/opt/llvm/bin/clang++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The ASM compiler identification is Clang with GNU-like command-line
-- Found assembler: /usr/local/opt/llvm/bin/clang
Homebrew clang version 13.0.1
Target: x86_64-apple-darwin21.4.0
Thread model: posix
InstalledDir: /usr/local/opt/llvm/bin
-- Using objcopy: /usr/local/opt/llvm/bin/llvm-objcopy
-- Using strip: /usr/bin/strip
-- Found Git: /usr/bin/git (found version "2.30.1 (Apple Git-130)")
-- HEAD's commit hash 7dc084419e16c52650088a3ae948850d94c4edf5
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
(commit or discard the untracked or modified content in submodules)
modified: contrib/rapidjson (untracked content)
no changes added to commit (use "git add" and/or "git commit -a")
CMake Warning at cmake/ccache.cmake:17 (message):
CCache is not found. We recommend setting it up if you build ClickHouse
from source often. Setting it up will significantly reduce compilation
time for 2nd and consequent builds
Call Stack (most recent call first):
CMakeLists.txt:70 (include)
-- CMAKE_BUILD_TYPE: RelWithDebInfo
-- Performing Test HAS_RESERVED_IDENTIFIER
-- Performing Test HAS_RESERVED_IDENTIFIER - Success
-- Performing Test HAS_SUGGEST_DESTRUCTOR_OVERRIDE
-- Performing Test HAS_SUGGEST_DESTRUCTOR_OVERRIDE - Success
-- Performing Test HAS_SHADOW
-- Performing Test HAS_SHADOW - Success
-- Performing Test HAS_SUGGEST_OVERRIDE
-- Performing Test HAS_SUGGEST_OVERRIDE - Success
-- Performing Test HAS_USE_CTOR_HOMING
-- Performing Test HAS_USE_CTOR_HOMING - Success
-- Performing Test HAVE_SSSE3
-- Performing Test HAVE_SSSE3 - Success
-- Performing Test HAVE_SSE41
-- Performing Test HAVE_SSE41 - Success
-- Performing Test HAVE_SSE42
-- Performing Test HAVE_SSE42 - Success
-- Performing Test HAVE_PCLMULQDQ
-- Performing Test HAVE_PCLMULQDQ - Success
-- Performing Test HAVE_POPCNT
-- Performing Test HAVE_POPCNT - Success
-- Performing Test HAVE_AVX
-- Performing Test HAVE_AVX - Success
-- Performing Test HAVE_AVX2
-- Performing Test HAVE_AVX2 - Success
-- Performing Test HAVE_AVX512
-- Performing Test HAVE_AVX512 - Success
-- Performing Test HAVE_BMI
-- Performing Test HAVE_BMI - Success
-- Default libraries: -nodefaultlibs -lc -lm -lpthread -ldl
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Unit tests are enabled
-- Building for: Darwin-21.4.0 x86_64 ;
USE_STATIC_LIBRARIES=ON
SPLIT_SHARED=
CCACHE=CCACHE_FOUND-NOTFOUND
-- Adding contrib module miniselect (configuring with miniselect-cmake)
-- Adding contrib module pdqsort (configuring with pdqsort-cmake)
-- Adding contrib module sparsehash-c11 (configuring with sparsehash-c11-cmake)
-- Adding contrib module abseil-cpp (configuring with abseil-cpp-cmake)
-- Adding contrib module magic_enum (configuring with magic-enum-cmake)
-- Adding contrib module boost (configuring with boost-cmake)
-- Adding contrib module cctz (configuring with cctz-cmake)
-- Packaging with tzdata version: 2021c
-- Adding contrib module consistent-hashing (configuring with consistent-hashing)
-- Adding contrib module dragonbox (configuring with dragonbox-cmake)
-- Adding contrib module hyperscan (configuring with hyperscan-cmake)
-- Adding contrib module jemalloc (configuring with jemalloc-cmake)
CMake Warning at contrib/jemalloc-cmake/CMakeLists.txt:20 (message):
jemalloc support on non-linux is EXPERIMENTAL
-- jemalloc malloc_conf: oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000
-- Adding contrib module libcpuid (configuring with libcpuid-cmake)
-- Adding contrib module libdivide (configuring with libdivide)
-- Adding contrib module libmetrohash (configuring with libmetrohash)
-- Adding contrib module lz4 (configuring with lz4-cmake)
-- Adding contrib module murmurhash (configuring with murmurhash)
-- Adding contrib module replxx (configuring with replxx-cmake)
-- Adding contrib module unixodbc (configuring with unixodbc-cmake)
-- ODBC is only supported on Linux
-- Not using ODBC
-- Adding contrib module nanodbc (configuring with nanodbc-cmake)
-- Adding contrib module capnproto (configuring with capnproto-cmake)
-- Adding contrib module yaml-cpp (configuring with yaml-cpp-cmake)
-- Adding contrib module re2 (configuring with re2-cmake)
-- Adding contrib module xz (configuring with xz-cmake)
-- Adding contrib module brotli (configuring with brotli-cmake)
-- Adding contrib module double-conversion (configuring with double-conversion-cmake)
-- Adding contrib module boringssl (configuring with boringssl-cmake)
-- Adding contrib module poco (configuring with poco-cmake)
-- Using Poco::Crypto
-- Not using Poco::Data::ODBC
-- Adding contrib module croaring (configuring with croaring-cmake)
-- Adding contrib module zstd (configuring with zstd-cmake)
-- ZSTD VERSION 1.5.0
-- Adding contrib module zlib-ng (configuring with zlib-ng-cmake)
-- Adding contrib module bzip2 (configuring with bzip2-cmake)
-- Adding contrib module minizip-ng (configuring with minizip-ng-cmake)
-- Adding contrib module snappy (configuring with snappy-cmake)
-- Adding contrib module rocksdb (configuring with rocksdb-cmake)
-- Performing Test HAVE_FALLOCATE
-- Performing Test HAVE_FALLOCATE - Failed
-- Performing Test HAVE_SYNC_FILE_RANGE_WRITE
-- Performing Test HAVE_SYNC_FILE_RANGE_WRITE - Failed
-- Performing Test HAVE_PTHREAD_MUTEX_ADAPTIVE_NP
-- Performing Test HAVE_PTHREAD_MUTEX_ADAPTIVE_NP - Failed
-- Looking for malloc_usable_size
-- Looking for malloc_usable_size - not found
-- Adding contrib module thrift (configuring with thrift-cmake)
-- Looking for arpa/inet.h
-- Looking for arpa/inet.h - found
-- Looking for fcntl.h
-- Looking for fcntl.h - found
-- Looking for getopt.h
-- Looking for getopt.h - found
-- Looking for inttypes.h
-- Looking for inttypes.h - found
-- Looking for netdb.h
-- Looking for netdb.h - found
-- Looking for netinet/in.h
-- Looking for netinet/in.h - found
-- Looking for signal.h
-- Looking for signal.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for unistd.h
-- Looking for unistd.h - found
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for sys/ioctl.h
-- Looking for sys/ioctl.h - found
-- Looking for sys/param.h
-- Looking for sys/param.h - found
-- Looking for sys/resource.h
-- Looking for sys/resource.h - found
-- Looking for sys/socket.h
-- Looking for sys/socket.h - found
-- Looking for sys/stat.h
-- Looking for sys/stat.h - found
-- Looking for sys/time.h
-- Looking for sys/time.h - found
-- Looking for sys/un.h
-- Looking for sys/un.h - found
-- Looking for poll.h
-- Looking for poll.h - found
-- Looking for sys/poll.h
-- Looking for sys/poll.h - found
-- Looking for sys/select.h
-- Looking for sys/select.h - found
-- Looking for sched.h
-- Looking for sched.h - found
-- Looking for string.h
-- Looking for string.h - found
-- Looking for strings.h
-- Looking for strings.h - found
-- Looking for gethostbyname
-- Looking for gethostbyname - found
-- Looking for gethostbyname_r
-- Looking for gethostbyname_r - not found
-- Looking for strerror_r
-- Looking for strerror_r - found
-- Looking for sched_get_priority_max
-- Looking for sched_get_priority_max - found
-- Looking for sched_get_priority_min
-- Looking for sched_get_priority_min - found
-- Performing Test STRERROR_R_CHAR_P
-- Performing Test STRERROR_R_CHAR_P - Failed
-- Adding contrib module arrow (configuring with arrow-cmake)
-- Looking for strtof_l
-- Looking for strtof_l - not found
-- Looking for strtoull_l
-- Looking for strtoull_l - not found
-- CMAKE_CXX_FLAGS: -fdiagnostics-color=always -Xclang -fuse-ctor-homing -fsized-deallocation -gdwarf-aranges -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -fasynchronous-unwind-tables -ffile-prefix-map=/repos/haggy/ClickHouse=. -falign-functions=32 -Wall -Wno-unused-command-line-argument -stdlib=libc++ -fdiagnostics-absolute-paths -fstrict-vtable-pointers -fexperimental-new-pass-manager -w -std=c++11 -Wall -pedantic -Werror -Wextra -Wno-unused-parameter -stdlib=libc++
-- Proceeding with version: 1.12.0.372
-- Adding contrib module avro (configuring with avro-cmake)
-- Adding contrib module protobuf (configuring with protobuf-cmake)
-- Adding contrib module openldap (configuring with openldap-cmake)
-- Adding contrib module grpc (configuring with grpc-cmake)
-- Performing Test IOS
-- Performing Test IOS - Failed
-- Performing Test IOS_V10
-- Performing Test IOS_V10 - Failed
-- Performing Test MACOS_V1012
-- Performing Test MACOS_V1012 - Success
-- Looking for clock_gettime in rt
-- Looking for clock_gettime in rt - not found
-- Looking for include file sys/types.h
-- Looking for include file sys/types.h - found
-- Looking for include file arpa/nameser_compat.h
-- Looking for include file arpa/nameser_compat.h - found
-- Looking for include file arpa/nameser.h
-- Looking for include file arpa/nameser.h - found
-- Looking for include file assert.h
-- Looking for include file assert.h - found
-- Looking for include file errno.h
-- Looking for include file errno.h - found
-- Looking for include file limits.h
-- Looking for include file limits.h - found
-- Looking for include file malloc.h
-- Looking for include file malloc.h - not found
-- Looking for include file memory.h
-- Looking for include file memory.h - found
-- Looking for include file netinet/tcp.h
-- Looking for include file netinet/tcp.h - found
-- Looking for include file net/if.h
-- Looking for include file net/if.h - found
-- Looking for include file socket.h
-- Looking for include file socket.h - not found
-- Looking for include file stdbool.h
-- Looking for include file stdbool.h - found
-- Looking for include file stdlib.h
-- Looking for include file stdlib.h - found
-- Looking for include file stropts.h
-- Looking for include file stropts.h - not found
-- Looking for include file sys/uio.h
-- Looking for include file sys/uio.h - found
-- Looking for include file time.h
-- Looking for include file time.h - found
-- Looking for include file dlfcn.h
-- Looking for include file dlfcn.h - found
-- Looking for include files winsock2.h, windows.h
-- Looking for include files winsock2.h, windows.h - not found
-- Looking for 3 include files winsock2.h, ..., windows.h
-- Looking for 3 include files winsock2.h, ..., windows.h - not found
-- Looking for include files winsock.h, windows.h
-- Looking for include files winsock.h, windows.h - not found
-- Looking for include file windows.h
-- Looking for include file windows.h - not found
-- Performing Test HAVE_SOCKLEN_T
-- Performing Test HAVE_SOCKLEN_T - Success
-- Performing Test HAVE_TYPE_SOCKET
-- Performing Test HAVE_TYPE_SOCKET - Failed
-- Performing Test HAVE_BOOL_T
-- Performing Test HAVE_BOOL_T - Success
-- Performing Test HAVE_SSIZE_T
-- Performing Test HAVE_SSIZE_T - Success
-- Performing Test HAVE_LONGLONG
-- Performing Test HAVE_LONGLONG - Success
-- Performing Test HAVE_SIG_ATOMIC_T
-- Performing Test HAVE_SIG_ATOMIC_T - Success
-- Performing Test HAVE_STRUCT_ADDRINFO
-- Performing Test HAVE_STRUCT_ADDRINFO - Success
-- Performing Test HAVE_STRUCT_IN6_ADDR
-- Performing Test HAVE_STRUCT_IN6_ADDR - Success
-- Performing Test HAVE_STRUCT_SOCKADDR_IN6
-- Performing Test HAVE_STRUCT_SOCKADDR_IN6 - Success
-- Performing Test HAVE_STRUCT_SOCKADDR_STORAGE
-- Performing Test HAVE_STRUCT_SOCKADDR_STORAGE - Success
-- Performing Test HAVE_STRUCT_TIMEVAL
-- Performing Test HAVE_STRUCT_TIMEVAL - Success
-- Looking for AF_INET6
-- Looking for AF_INET6 - found
-- Looking for O_NONBLOCK
-- Looking for O_NONBLOCK - found
-- Looking for FIONBIO
-- Looking for FIONBIO - found
-- Looking for SIOCGIFADDR
-- Looking for SIOCGIFADDR - found
-- Looking for MSG_NOSIGNAL
-- Looking for MSG_NOSIGNAL - found
-- Looking for PF_INET6
-- Looking for PF_INET6 - found
-- Looking for SO_NONBLOCK
-- Looking for SO_NONBLOCK - not found
-- Looking for CLOCK_MONOTONIC
-- Looking for CLOCK_MONOTONIC - found
-- Performing Test HAVE_SOCKADDR_IN6_SIN6_SCOPE_ID
-- Performing Test HAVE_SOCKADDR_IN6_SIN6_SCOPE_ID - Success
-- Performing Test HAVE_LL
-- Performing Test HAVE_LL - Success
-- Looking for bitncmp
-- Looking for bitncmp - not found
-- Looking for closesocket
-- Looking for closesocket - not found
-- Looking for CloseSocket
-- Looking for CloseSocket - not found
-- Looking for connect
-- Looking for connect - found
-- Looking for fcntl
-- Looking for fcntl - found
-- Looking for freeaddrinfo
-- Looking for freeaddrinfo - found
-- Looking for getaddrinfo
-- Looking for getaddrinfo - found
-- Looking for getenv
-- Looking for getenv - found
-- Looking for gethostbyaddr
-- Looking for gethostbyaddr - found
-- Looking for gethostname
-- Looking for gethostname - found
-- Looking for getnameinfo
-- Looking for getnameinfo - found
-- Looking for getservbyport_r
-- Looking for getservbyport_r - not found
-- Looking for gettimeofday
-- Looking for gettimeofday - found
-- Looking for if_indextoname
-- Looking for if_indextoname - found
-- Looking for inet_net_pton
-- Looking for inet_net_pton - found
-- Looking for inet_ntop
-- Looking for inet_ntop - found
-- Looking for inet_pton
-- Looking for inet_pton - found
-- Looking for ioctl
-- Looking for ioctl - found
-- Looking for ioctlsocket
-- Looking for ioctlsocket - not found
-- Looking for IoctlSocket
-- Looking for IoctlSocket - not found
-- Looking for recv
-- Looking for recv - found
-- Looking for recvfrom
-- Looking for recvfrom - found
-- Looking for send
-- Looking for send - found
-- Looking for setsockopt
-- Looking for setsockopt - found
-- Looking for socket
-- Looking for socket - found
-- Looking for strcasecmp
-- Looking for strcasecmp - found
-- Looking for strcmpi
-- Looking for strcmpi - not found
-- Looking for strdup
-- Looking for strdup - found
-- Looking for stricmp
-- Looking for stricmp - not found
-- Looking for strncasecmp
-- Looking for strncasecmp - found
-- Looking for strncmpi
-- Looking for strncmpi - not found
-- Looking for strnicmp
-- Looking for strnicmp - not found
-- Looking for writev
-- Looking for writev - found
-- Looking for __system_property_get
-- Looking for __system_property_get - not found
-- Adding contrib module msgpack-c (configuring with msgpack-c-cmake)
-- Adding contrib module cityhash102 (configuring with cityhash102)
-- Adding contrib module libfarmhash (configuring with libfarmhash)
-- Adding contrib module icu (configuring with icu-cmake)
-- Not using icu
-- Adding contrib module h3 (configuring with h3-cmake)
-- Adding contrib module mariadb-connector-c (configuring with mariadb-connector-c-cmake)
-- Build without mysqlclient (support for MYSQL dictionary source will be disabled)
-- Adding contrib module googletest (configuring with googletest-cmake)
-- Adding contrib module llvm (configuring with llvm-cmake)
-- Not using LLVM
-- Adding contrib module libxml2 (configuring with libxml2-cmake)
-- Adding contrib module aws;aws-c-common;aws-c-event-stream;aws-checksums (configuring with aws-s3-cmake)
-- Adding contrib module base64 (configuring with base64-cmake)
-- Adding contrib module simdjson (configuring with simdjson-cmake)
-- Adding contrib module rapidjson (configuring with rapidjson-cmake)
-- Adding contrib module fastops (configuring with fastops-cmake)
-- Not using fast vectorized mathematical functions library by Mikhail Parakhin
-- Adding contrib module libuv (configuring with libuv-cmake)
-- Adding contrib module AMQP-CPP (configuring with amqpcpp-cmake)
-- Adding contrib module cassandra (configuring with cassandra-cmake)
-- Adding contrib module fmtlib (configuring with fmtlib-cmake)
-- Adding contrib module krb5 (configuring with krb5-cmake)
-- Adding contrib module cyrus-sasl (configuring with cyrus-sasl-cmake)
-- Adding contrib module libgsasl (configuring with libgsasl-cmake)
-- Adding contrib module librdkafka (configuring with librdkafka-cmake)
-- librdkafka with SASL support
-- librdkafka with SSL support
-- Adding contrib module libhdfs3 (configuring with libhdfs3-cmake)
-- Not using hdfs
-- Adding contrib module hive-metastore (configuring with hive-metastore-cmake)
Hive disabled
-- Adding contrib module cppkafka (configuring with cppkafka-cmake)
-- Adding contrib module libpqxx (configuring with libpqxx-cmake)
-- Adding contrib module libpq (configuring with libpq-cmake)
-- Adding contrib module NuRaft (configuring with nuraft-cmake)
-- Adding contrib module fast_float (configuring with fast_float-cmake)
-- Adding contrib module datasketches-cpp (configuring with datasketches-cpp-cmake)
-- Adding contrib module libstemmer_c (configuring with libstemmer-c-cmake)
-- Adding contrib module wordnet-blast (configuring with wordnet-blast-cmake)
-- Adding contrib module lemmagen-c (configuring with lemmagen-c-cmake)
-- Adding contrib module nlp-data (configuring with nlp-data-cmake)
-- Adding contrib module cld2 (configuring with cld2-cmake)
-- Adding contrib module sqlite-amalgamation (configuring with sqlite-cmake)
-- Adding contrib module s2geometry (configuring with s2geometry-cmake)
-- Performing Test SUPPORTS_CXXFLAG_frame_larger_than=65536
-- Performing Test SUPPORTS_CXXFLAG_frame_larger_than=65536 - Success
-- Performing Test SUPPORTS_CFLAG_frame_larger_than=65536
-- Performing Test SUPPORTS_CFLAG_frame_larger_than=65536 - Success
-- Performing Test SUPPORTS_CXXFLAG_pedantic
-- Performing Test SUPPORTS_CXXFLAG_pedantic - Success
-- Performing Test SUPPORTS_CFLAG_pedantic
-- Performing Test SUPPORTS_CFLAG_pedantic - Success
-- Performing Test SUPPORTS_CXXFLAG_no_vla_extension
-- Performing Test SUPPORTS_CXXFLAG_no_vla_extension - Success
-- Performing Test SUPPORTS_CFLAG_no_vla_extension
-- Performing Test SUPPORTS_CFLAG_no_vla_extension - Success
-- Performing Test SUPPORTS_CXXFLAG_no_zero_length_array
-- Performing Test SUPPORTS_CXXFLAG_no_zero_length_array - Success
-- Performing Test SUPPORTS_CFLAG_no_zero_length_array
-- Performing Test SUPPORTS_CFLAG_no_zero_length_array - Success
-- Performing Test SUPPORTS_CXXFLAG_no_c11_extensions
-- Performing Test SUPPORTS_CXXFLAG_no_c11_extensions - Success
-- Performing Test SUPPORTS_CFLAG_no_c11_extensions
-- Performing Test SUPPORTS_CFLAG_no_c11_extensions - Success
-- Performing Test SUPPORTS_CXXFLAG_everything
-- Performing Test SUPPORTS_CXXFLAG_everything - Success
-- Performing Test SUPPORTS_CFLAG_everything
-- Performing Test SUPPORTS_CFLAG_everything - Success
-- Performing Test SUPPORTS_CXXFLAG_no_cxx98_compat_pedantic
-- Performing Test SUPPORTS_CXXFLAG_no_cxx98_compat_pedantic - Success
-- Performing Test SUPPORTS_CFLAG_no_cxx98_compat_pedantic
-- Performing Test SUPPORTS_CFLAG_no_cxx98_compat_pedantic - Success
-- Performing Test SUPPORTS_CXXFLAG_no_cxx98_compat
-- Performing Test SUPPORTS_CXXFLAG_no_cxx98_compat - Success
-- Performing Test SUPPORTS_CFLAG_no_cxx98_compat
-- Performing Test SUPPORTS_CFLAG_no_cxx98_compat - Success
-- Performing Test SUPPORTS_CXXFLAG_no_c99_extensions
-- Performing Test SUPPORTS_CXXFLAG_no_c99_extensions - Success
-- Performing Test SUPPORTS_CFLAG_no_c99_extensions
-- Performing Test SUPPORTS_CFLAG_no_c99_extensions - Success
-- Performing Test SUPPORTS_CXXFLAG_no_conversion
-- Performing Test SUPPORTS_CXXFLAG_no_conversion - Success
-- Performing Test SUPPORTS_CFLAG_no_conversion
-- Performing Test SUPPORTS_CFLAG_no_conversion - Success
-- Performing Test SUPPORTS_CXXFLAG_no_ctad_maybe_unsupported
-- Performing Test SUPPORTS_CXXFLAG_no_ctad_maybe_unsupported - Success
-- Performing Test SUPPORTS_CFLAG_no_ctad_maybe_unsupported
-- Performing Test SUPPORTS_CFLAG_no_ctad_maybe_unsupported - Success
-- Performing Test SUPPORTS_CXXFLAG_no_deprecated_dynamic_exception_spec
-- Performing Test SUPPORTS_CXXFLAG_no_deprecated_dynamic_exception_spec - Success
-- Performing Test SUPPORTS_CFLAG_no_deprecated_dynamic_exception_spec
-- Performing Test SUPPORTS_CFLAG_no_deprecated_dynamic_exception_spec - Success
-- Performing Test SUPPORTS_CXXFLAG_no_disabled_macro_expansion
-- Performing Test SUPPORTS_CXXFLAG_no_disabled_macro_expansion - Success
-- Performing Test SUPPORTS_CFLAG_no_disabled_macro_expansion
-- Performing Test SUPPORTS_CFLAG_no_disabled_macro_expansion - Success
-- Performing Test SUPPORTS_CXXFLAG_no_documentation_unknown_command
-- Performing Test SUPPORTS_CXXFLAG_no_documentation_unknown_command - Success
-- Performing Test SUPPORTS_CFLAG_no_documentation_unknown_command
-- Performing Test SUPPORTS_CFLAG_no_documentation_unknown_command - Success
-- Performing Test SUPPORTS_CXXFLAG_no_double_promotion
-- Performing Test SUPPORTS_CXXFLAG_no_double_promotion - Success
-- Performing Test SUPPORTS_CFLAG_no_double_promotion
-- Performing Test SUPPORTS_CFLAG_no_double_promotion - Success
-- Performing Test SUPPORTS_CXXFLAG_no_exit_time_destructors
-- Performing Test SUPPORTS_CXXFLAG_no_exit_time_destructors - Success
-- Performing Test SUPPORTS_CFLAG_no_exit_time_destructors
-- Performing Test SUPPORTS_CFLAG_no_exit_time_destructors - Success
-- Performing Test SUPPORTS_CXXFLAG_no_float_equal
-- Performing Test SUPPORTS_CXXFLAG_no_float_equal - Success
-- Performing Test SUPPORTS_CFLAG_no_float_equal
-- Performing Test SUPPORTS_CFLAG_no_float_equal - Success
-- Performing Test SUPPORTS_CXXFLAG_no_global_constructors
-- Performing Test SUPPORTS_CXXFLAG_no_global_constructors - Success
-- Performing Test SUPPORTS_CFLAG_no_global_constructors
-- Performing Test SUPPORTS_CFLAG_no_global_constructors - Success
-- Performing Test SUPPORTS_CXXFLAG_no_missing_prototypes
-- Performing Test SUPPORTS_CXXFLAG_no_missing_prototypes - Success
-- Performing Test SUPPORTS_CFLAG_no_missing_prototypes
-- Performing Test SUPPORTS_CFLAG_no_missing_prototypes - Success
-- Performing Test SUPPORTS_CXXFLAG_no_missing_variable_declarations
-- Performing Test SUPPORTS_CXXFLAG_no_missing_variable_declarations - Success
-- Performing Test SUPPORTS_CFLAG_no_missing_variable_declarations
-- Performing Test SUPPORTS_CFLAG_no_missing_variable_declarations - Success
-- Performing Test SUPPORTS_CXXFLAG_no_nested_anon_types
-- Performing Test SUPPORTS_CXXFLAG_no_nested_anon_types - Success
-- Performing Test SUPPORTS_CFLAG_no_nested_anon_types
-- Performing Test SUPPORTS_CFLAG_no_nested_anon_types - Success
-- Performing Test SUPPORTS_CXXFLAG_no_packed
-- Performing Test SUPPORTS_CXXFLAG_no_packed - Success
-- Performing Test SUPPORTS_CFLAG_no_packed
-- Performing Test SUPPORTS_CFLAG_no_packed - Success
-- Performing Test SUPPORTS_CXXFLAG_no_padded
-- Performing Test SUPPORTS_CXXFLAG_no_padded - Success
-- Performing Test SUPPORTS_CFLAG_no_padded
-- Performing Test SUPPORTS_CFLAG_no_padded - Success
-- Performing Test SUPPORTS_CXXFLAG_no_return_std_move_in_cxx11
-- Performing Test SUPPORTS_CXXFLAG_no_return_std_move_in_cxx11 - Failed
-- Performing Test SUPPORTS_CFLAG_no_return_std_move_in_cxx11
-- Performing Test SUPPORTS_CFLAG_no_return_std_move_in_cxx11 - Failed
-- Flag -Wno-return-std-move-in-c++11 is unsupported
-- Flag -Wno-return-std-move-in-c++11 is unsupported
-- Performing Test SUPPORTS_CXXFLAG_no_shift_sign_overflow
-- Performing Test SUPPORTS_CXXFLAG_no_shift_sign_overflow - Success
-- Performing Test SUPPORTS_CFLAG_no_shift_sign_overflow
-- Performing Test SUPPORTS_CFLAG_no_shift_sign_overflow - Success
-- Performing Test SUPPORTS_CXXFLAG_no_sign_conversion
-- Performing Test SUPPORTS_CXXFLAG_no_sign_conversion - Success
-- Performing Test SUPPORTS_CFLAG_no_sign_conversion
-- Performing Test SUPPORTS_CFLAG_no_sign_conversion - Success
-- Performing Test SUPPORTS_CXXFLAG_no_switch_enum
-- Performing Test SUPPORTS_CXXFLAG_no_switch_enum - Success
-- Performing Test SUPPORTS_CFLAG_no_switch_enum
-- Performing Test SUPPORTS_CFLAG_no_switch_enum - Success
-- Performing Test SUPPORTS_CXXFLAG_no_undefined_func_template
-- Performing Test SUPPORTS_CXXFLAG_no_undefined_func_template - Success
-- Performing Test SUPPORTS_CFLAG_no_undefined_func_template
-- Performing Test SUPPORTS_CFLAG_no_undefined_func_template - Success
-- Performing Test SUPPORTS_CXXFLAG_no_unused_template
-- Performing Test SUPPORTS_CXXFLAG_no_unused_template - Success
-- Performing Test SUPPORTS_CFLAG_no_unused_template
-- Performing Test SUPPORTS_CFLAG_no_unused_template - Success
-- Performing Test SUPPORTS_CXXFLAG_no_vla
-- Performing Test SUPPORTS_CXXFLAG_no_vla - Success
-- Performing Test SUPPORTS_CFLAG_no_vla
-- Performing Test SUPPORTS_CFLAG_no_vla - Success
-- Performing Test SUPPORTS_CXXFLAG_no_weak_template_vtables
-- Performing Test SUPPORTS_CXXFLAG_no_weak_template_vtables - Success
-- Performing Test SUPPORTS_CFLAG_no_weak_template_vtables
-- Performing Test SUPPORTS_CFLAG_no_weak_template_vtables - Success
-- Performing Test SUPPORTS_CXXFLAG_no_weak_vtables
-- Performing Test SUPPORTS_CXXFLAG_no_weak_vtables - Success
-- Performing Test SUPPORTS_CFLAG_no_weak_vtables
-- Performing Test SUPPORTS_CFLAG_no_weak_vtables - Success
-- compiler C = /usr/local/opt/llvm/bin/clang -I/usr/local/opt/zlib/include -I/usr/local/opt/lbzip2/include -I/usr/local/opt/bzip2/include -I/usr/local/opt/sqlite/include -I/usr/local/opt/readline/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/snappy/include -I/usr/local/opt/libmemcached/include -I/usr/local/opt/[email protected]/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk/usr/include -fdiagnostics-color=always -Xclang -fuse-ctor-homing -gdwarf-aranges -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -fasynchronous-unwind-tables -ffile-prefix-map=./repos/haggy/ClickHouse=. -falign-functions=32 -Wall -Wno-unused-command-line-argument -fdiagnostics-absolute-paths -fexperimental-new-pass-manager -Wframe-larger-than=65536 -Wpedantic -Wno-vla-extension -Wno-zero-length-array -Wno-c11-extensions -Weverything -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-c99-extensions -Wno-conversion -Wno-ctad-maybe-unsupported -Wno-deprecated-dynamic-exception-spec -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-nested-anon-types -Wno-packed -Wno-padded -Wno-shift-sign-overflow -Wno-sign-conversion -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -O2 -g -DNDEBUG -O3 -g -gdwarf-4
-- compiler CXX = /usr/local/opt/llvm/bin/clang++ -fdiagnostics-color=always -Xclang -fuse-ctor-homing -fsized-deallocation -gdwarf-aranges -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -fasynchronous-unwind-tables -ffile-prefix-map=./repos/haggy/ClickHouse=. -falign-functions=32 -Wall -Wno-unused-command-line-argument -stdlib=libc++ -fdiagnostics-absolute-paths -fstrict-vtable-pointers -fexperimental-new-pass-manager -Wextra -Wframe-larger-than=65536 -Wpedantic -Wno-vla-extension -Wno-zero-length-array -Wno-c11-extensions -Weverything -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-c99-extensions -Wno-conversion -Wno-ctad-maybe-unsupported -Wno-deprecated-dynamic-exception-spec -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-nested-anon-types -Wno-packed -Wno-padded -Wno-shift-sign-overflow -Wno-sign-conversion -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -O2 -g -DNDEBUG -O3 -g -gdwarf-4
-- LINKER_FLAGS = -L/usr/local/opt/zlib/lib -L/usr/local/opt/lbzip2/lib -L/usr/local/opt/bzip2/lib -L/usr/local/opt/sqlite/lib -L/usr/local/opt/readline/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/snappy/lib -L/usr/local/opt/libmemcached/lib -L/usr/local/opt/[email protected]/lib -L/Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk/usr/lib -rdynamic -Wl,-U,_inside_main
-- ./repos/haggy/ClickHouse/src: Have 40278 megabytes of memory.
Limiting concurrent linkers jobs to 11 and compiler jobs to 16 (system has 16 logical cores)
-- Will build ClickHouse 22.5.1.1 revision 54462
INFONot generating debugger info for ClickHouse functions
-- StorageFileLog is only supported on Linux
-- ClickHouse modes:
-- Server mode: ON
-- Client mode: ON
-- Local mode: ON
-- Benchmark mode: ON
-- Extract from config mode: ON
-- Compressor mode: ON
-- Copier mode: ON
-- Format mode: ON
-- Obfuscator mode: ON
-- ODBC bridge mode: OFF
-- Library bridge mode: ON
-- ClickHouse install: ON
-- ClickHouse git-import: ON
-- ClickHouse keeper mode: ON
-- ClickHouse keeper-converter mode: ON
-- bash_completion will be written to /usr/local/share/bash-completion/completions
-- Target check already exists
-- /repos/haggy/ClickHouse/utils: Have 40278 megabytes of memory.
Limiting concurrent linkers jobs to 11 and compiler jobs to OFF (system has 16 logical cores)
-- Configuring done
-- Generating done
-- Build files have been written to: ./repos/haggy/ClickHouse/build
```
### Build
```
#>> cmake --build . --config RelWithDebInfo
[704/7780] Building C object contrib/libpq-cmake/CMakeFiles/_libpq.dir/__/libpq/fe-secure-openssl.c.o
FAILED: contrib/libpq-cmake/CMakeFiles/_libpq.dir/__/libpq/fe-secure-openssl.c.o
/usr/local/opt/llvm/bin/clang -DHAS_RESERVED_IDENTIFIER -isystem /Users/repos/haggy/ClickHouse/contrib/libpq -isystem /Users/repos/haggy/ClickHouse/contrib/libpq/include -isystem /Users/repos/haggy/ClickHouse/contrib/libpq/configs -isystem /Users/repos/haggy/ClickHouse/contrib/libcxx/include -isystem /Users/repos/haggy/ClickHouse/contrib/libcxx/src -isystem /Users/repos/haggy/ClickHouse/contrib/libcxxabi/include -isystem /Users/repos/haggy/ClickHouse/contrib/boringssl/include -I/usr/local/opt/zlib/include -I/usr/local/opt/lbzip2/include -I/usr/local/opt/bzip2/include -I/usr/local/opt/sqlite/include -I/usr/local/opt/readline/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/snappy/include -I/usr/local/opt/libmemcached/include -I/usr/local/opt/[email protected]/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk/usr/include -fdiagnostics-color=always -Xclang -fuse-ctor-homing -gdwarf-aranges -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -fasynchronous-unwind-tables -ffile-prefix-map=/Users/repos/haggy/ClickHouse=. -falign-functions=32 -Wall -Wno-unused-command-line-argument -fdiagnostics-absolute-paths -fexperimental-new-pass-manager -w -O2 -g -DNDEBUG -O3 -g -gdwarf-4 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk -mmacosx-version-min=10.15 -D OS_DARWIN -Werror -std=gnu11 -MD -MT contrib/libpq-cmake/CMakeFiles/_libpq.dir/__/libpq/fe-secure-openssl.c.o -MF contrib/libpq-cmake/CMakeFiles/_libpq.dir/__/libpq/fe-secure-openssl.c.o.d -o contrib/libpq-cmake/CMakeFiles/_libpq.dir/__/libpq/fe-secure-openssl.c.o -c /Users/repos/haggy/ClickHouse/contrib/libpq/fe-secure-openssl.c
/Users/repos/haggy/ClickHouse/contrib/libpq/fe-secure-openssl.c:1732:27: error: invalid application of 'sizeof' to an incomplete type 'BIO_METHOD' (aka 'struct bio_method_st')
my_bio_methods = malloc(sizeof(BIO_METHOD));
^ ~~~~~~~~~~~~
/usr/local/Cellar/[email protected]/1.1.1n/include/openssl/bio.h:250:16: note: forward declaration of 'struct bio_method_st'
typedef struct bio_method_st BIO_METHOD;
^
/Users/repos/haggy/ClickHouse/contrib/libpq/fe-secure-openssl.c:1735:32: error: invalid application of 'sizeof' to an incomplete type 'BIO_METHOD' (aka 'struct bio_method_st')
memcpy(my_bio_methods, biom, sizeof(BIO_METHOD));
^ ~~~~~~~~~~~~
/Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk/usr/include/secure/_string.h:63:33: note: expanded from macro 'memcpy'
__builtin___memcpy_chk (dest, __VA_ARGS__, __darwin_obsz0 (dest))
^~~~~~~~~~~
/usr/local/Cellar/[email protected]/1.1.1n/include/openssl/bio.h:250:16: note: forward declaration of 'struct bio_method_st'
typedef struct bio_method_st BIO_METHOD;
^
/Users/repos/haggy/ClickHouse/contrib/libpq/fe-secure-openssl.c:1736:17: error: incomplete definition of type 'struct bio_method_st'
my_bio_methods->bread = my_sock_read;
~~~~~~~~~~~~~~^
/usr/local/Cellar/[email protected]/1.1.1n/include/openssl/bio.h:250:16: note: forward declaration of 'struct bio_method_st'
typedef struct bio_method_st BIO_METHOD;
^
/Users/repos/haggy/ClickHouse/contrib/libpq/fe-secure-openssl.c:1737:17: error: incomplete definition of type 'struct bio_method_st'
my_bio_methods->bwrite = my_sock_write;
~~~~~~~~~~~~~~^
/usr/local/Cellar/[email protected]/1.1.1n/include/openssl/bio.h:250:16: note: forward declaration of 'struct bio_method_st'
typedef struct bio_method_st BIO_METHOD;
^
4 errors generated.
[706/7780] Building C object contrib/libpq-cmake/CMakeFiles/_libpq.dir/__/libpq/common/hmac_openssl.c.o
FAILED: contrib/libpq-cmake/CMakeFiles/_libpq.dir/__/libpq/common/hmac_openssl.c.o
/usr/local/opt/llvm/bin/clang -DHAS_RESERVED_IDENTIFIER -isystem /Users/repos/haggy/ClickHouse/contrib/libpq -isystem /Users/repos/haggy/ClickHouse/contrib/libpq/include -isystem /Users/repos/haggy/ClickHouse/contrib/libpq/configs -isystem /Users/repos/haggy/ClickHouse/contrib/libcxx/include -isystem /Users/repos/haggy/ClickHouse/contrib/libcxx/src -isystem /Users/repos/haggy/ClickHouse/contrib/libcxxabi/include -isystem /Users/repos/haggy/ClickHouse/contrib/boringssl/include -I/usr/local/opt/zlib/include -I/usr/local/opt/lbzip2/include -I/usr/local/opt/bzip2/include -I/usr/local/opt/sqlite/include -I/usr/local/opt/readline/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/snappy/include -I/usr/local/opt/libmemcached/include -I/usr/local/opt/[email protected]/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk/usr/include -fdiagnostics-color=always -Xclang -fuse-ctor-homing -gdwarf-aranges -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -fasynchronous-unwind-tables -ffile-prefix-map=/Users/repos/haggy/ClickHouse=. -falign-functions=32 -Wall -Wno-unused-command-line-argument -fdiagnostics-absolute-paths -fexperimental-new-pass-manager -w -O2 -g -DNDEBUG -O3 -g -gdwarf-4 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk -mmacosx-version-min=10.15 -D OS_DARWIN -Werror -std=gnu11 -MD -MT contrib/libpq-cmake/CMakeFiles/_libpq.dir/__/libpq/common/hmac_openssl.c.o -MF contrib/libpq-cmake/CMakeFiles/_libpq.dir/__/libpq/common/hmac_openssl.c.o.d -o contrib/libpq-cmake/CMakeFiles/_libpq.dir/__/libpq/common/hmac_openssl.c.o -c /Users/repos/haggy/ClickHouse/contrib/libpq/common/hmac_openssl.c
/Users/repos/haggy/ClickHouse/contrib/libpq/common/hmac_openssl.c:93:23: error: invalid application of 'sizeof' to an incomplete type 'HMAC_CTX' (aka 'struct hmac_ctx_st')
ctx->hmacctx = ALLOC(sizeof(HMAC_CTX));
^ ~~~~~~~~~~
/Users/repos/haggy/ClickHouse/contrib/libpq/common/hmac_openssl.c:49:28: note: expanded from macro 'ALLOC'
#define ALLOC(size) malloc(size)
^~~~
/usr/local/Cellar/[email protected]/1.1.1n/include/openssl/ossl_typ.h:102:16: note: forward declaration of 'struct hmac_ctx_st'
typedef struct hmac_ctx_st HMAC_CTX;
^
/Users/repos/haggy/ClickHouse/contrib/libpq/common/hmac_openssl.c:114:26: error: invalid application of 'sizeof' to an incomplete type 'HMAC_CTX' (aka 'struct hmac_ctx_st')
memset(ctx->hmacctx, 0, sizeof(HMAC_CTX));
^ ~~~~~~~~~~
/Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk/usr/include/secure/_string.h:77:33: note: expanded from macro 'memset'
__builtin___memset_chk (dest, __VA_ARGS__, __darwin_obsz0 (dest))
^~~~~~~~~~~
/usr/local/Cellar/[email protected]/1.1.1n/include/openssl/ossl_typ.h:102:16: note: forward declaration of 'struct hmac_ctx_st'
typedef struct hmac_ctx_st HMAC_CTX;
^
/Users/repos/haggy/ClickHouse/contrib/libpq/common/hmac_openssl.c:250:31: error: invalid application of 'sizeof' to an incomplete type 'HMAC_CTX' (aka 'struct hmac_ctx_st')
explicit_bzero(ctx->hmacctx, sizeof(HMAC_CTX));
^ ~~~~~~~~~~
/usr/local/Cellar/[email protected]/1.1.1n/include/openssl/ossl_typ.h:102:16: note: forward declaration of 'struct hmac_ctx_st'
typedef struct hmac_ctx_st HMAC_CTX;
^
3 errors generated.
[721/7780] Building CXX object contrib/nuraft-cmake/CMakeFiles/_nuraft.dir/__/NuRaft/src/asio_service.cxx.o
ninja: build stopped: subcommand failed.
``` | https://github.com/ClickHouse/ClickHouse/issues/36810 | https://github.com/ClickHouse/ClickHouse/pull/36819 | 65f33a30891871a78fe84b778cfa15bdf8be5897 | 8740d82bb5481aabebcfab1bcaf0e16450d3636d | "2022-04-29T17:55:08Z" | c++ | "2022-04-30T10:32:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,808 | ["src/Functions/FunctionsHashing.h", "tests/queries/0_stateless/02292_hash_array_tuples.reference", "tests/queries/0_stateless/02292_hash_array_tuples.sql"] | Hash functions are not supported for `Array(Tuple(...))` columns | **Describe the unexpected behaviour**
Hash functions are not supported for `Array(Tuple(...))` columns:
```sql
SELECT cityHash64([(1, 2), (2, 3)])
Received exception from server (version 22.5.1):
Code: 48. DB::Exception: Received from localhost:9000. DB::Exception: Method getDataAt is not supported for Tuple(UInt8, UInt8): While processing cityHash64([(1, 2), (2, 3)]). (NOT_IMPLEMENTED)
``` | https://github.com/ClickHouse/ClickHouse/issues/36808 | https://github.com/ClickHouse/ClickHouse/pull/36812 | 13e8db62992637c446722568f81097afb54f5b9c | 0caf91602fc383f56682b0328babf9491f4bb62b | "2022-04-29T17:07:11Z" | c++ | "2022-05-06T12:15:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,797 | ["CMakeLists.txt", "cmake/ld.lld.in", "cmake/split_debug_symbols.cmake", "cmake/tools.cmake", "src/Common/Elf.cpp", "src/Common/SymbolIndex.cpp", "tests/queries/0_stateless/02161_addressToLineWithInlines.sql", "tests/queries/0_stateless/02420_stracktrace_debug_symbols.reference", "tests/queries/0_stateless/02420_stracktrace_debug_symbols.sh"] | introspection functions don't load debug symbols from clickhouse-common-static-dbg anymore | null | https://github.com/ClickHouse/ClickHouse/issues/36797 | https://github.com/ClickHouse/ClickHouse/pull/40873 | 365438d6172cb643603d59a81c12eb3f10d4c5e6 | 499e479892b68414f087a19759fe3600508e3bb3 | "2022-04-29T13:41:54Z" | c++ | "2022-09-07T15:31:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,792 | ["src/Interpreters/InterpreterInsertQuery.cpp", "tests/queries/0_stateless/01119_optimize_trivial_insert_select.reference", "tests/queries/0_stateless/01119_optimize_trivial_insert_select.sql"] | Optimization of trivial INSERT SELECT significantly slows down some queries |
**The creation of table becomes too slow when I use CTEs within the query.**
***ClickHouse server version 22.4.3***
**How to reproduce**
Trying to create a table based on the result of a query, I take very bad performance when I use a CTE:
```
--creation of table t and insertion of 10000000000 records
--exec time: ~260 seconds
drop table if exists t;
create table t(a Int64) engine = MergeTree() order by a;
insert into t SELECT * FROM system.numbers LIMIT 10000000000;
--scenario 1
--exec time: ~200 seconds
drop table if exists t2;
create table t2 engine = MergeTree() order by c
as (
with
cte1 as (
SELECT * FROM t WHERE modulo(a,2)=1
)
SELECT count(*) as c FROM cte1
);
--end of scenario 1
--scenario 2
--exec time: ~10 seconds
drop table if exists t3;
create table t3 engine = MergeTree() order by c
as (
SELECT count(*) as c FROM t WHERE modulo(a,2)=1
);
--end of scenario 2
```
Although the two scenarios do exactly the same thing (the only difference is that the first one goes through an intermediate CTE), there is a huge difference in execution time.
The issue becomes even stranger if you execute the following queries on their own and see that their execution times are the same, which is what you would expect.
```
--query from scenario 1
--exec time: ~9 seconds
with
cte1 as (
SELECT * FROM t WHERE modulo(a,2)=1
)
SELECT count(*) as c FROM cte1
--query from scenario 2
--exec time: ~9 seconds
SELECT count(*) as c FROM t WHERE modulo(a,2)=1
```
Obviously, there is a performance issue when someone wants to `CREATE TABLE AS` and uses CTEs in the query.
Does anyone know why ClickHouse behaves in this strange way?
| https://github.com/ClickHouse/ClickHouse/issues/36792 | https://github.com/ClickHouse/ClickHouse/pull/37047 | a3df693acee9e3be7bd787c5191d4ed734f5b5f2 | a3d922d6f6127d99704c48bbb7e5c7eaae286780 | "2022-04-29T12:15:49Z" | c++ | "2022-05-10T10:02:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,770 | ["src/Functions/serverConstants.cpp", "tests/config/config.d/display_name.xml", "tests/config/install.sh", "tests/queries/0_stateless/02313_displayname.reference", "tests/queries/0_stateless/02313_displayname.sql"] | Add miscellaneous function `displayName()` | **Use case**
It is better than hostname in the following example:
```
SELECT hostname() AS host, event_time, round(value, 2) AS v, bar(v, 0, 10, 100) FROM system.asynchronous_metric_log WHERE event_date >= yesterday() AND event_time >= now() - 300 AND metric = 'LoadAverage1' ORDER BY host, event_time
```
This is because hostnames can be non-informative, like `ip-172-31-5-46`.
| https://github.com/ClickHouse/ClickHouse/issues/36770 | https://github.com/ClickHouse/ClickHouse/pull/37681 | 88033562cd3906537cdf60004479495ccedd3061 | c4eb83408f5712500ed3668c026d346b5acdb3a6 | "2022-04-29T04:33:25Z" | c++ | "2022-11-08T09:53:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,714 | ["src/AggregateFunctions/AggregateFunctionMannWhitney.h", "src/AggregateFunctions/AggregateFunctionStudentTTest.cpp", "src/AggregateFunctions/AggregateFunctionTTest.h", "src/AggregateFunctions/AggregateFunctionWelchTTest.cpp", "src/AggregateFunctions/StatCommon.h", "tests/queries/0_stateless/01560_mann_whitney.reference", "tests/queries/0_stateless/02293_ttest_large_samples.reference", "tests/queries/0_stateless/02293_ttest_large_samples.sql"] | Incorrect pvalue in studentTTest | **Describe what's wrong**
When using the `studentTTest` or `welchTTest` functions, they sometimes return the p-value "1" even though this result is wrong. I can't see a very specific pattern to it, but it might be related to large numbers of input rows, as these are the only examples I can produce. Here are two examples:
```
SELECT
studentTTest(sample, variant)
FROM (
SELECT
toFloat64(number) % 30 AS sample,
0 AS variant
FROM system.numbers limit 500000
UNION ALL
SELECT
toFloat64(number) % 30 + 0.0022 AS sample,
1 AS variant
FROM system.numbers limit 500000
)
```
Output in the above is `(-0.12708812085024285,1)`. The t-statistic value (-0.12..) appears correct to me. However, the p-value based on this t-statistic should be about 0.9. A more extreme example can be created by keeping the same query but increasing the number of rows (this scale of input rows is still very common in our use case):
```
SELECT
studentTTest(sample, variant)
FROM (
SELECT
toFloat64(number) % 30 AS sample,
0 AS variant
FROM system.numbers limit 50000000
UNION ALL
SELECT
toFloat64(number) % 30 + 0.0022 AS sample,
1 AS variant
FROM system.numbers limit 50000000
)
```
Output is `(-1.2708804531666449,1)`. Again, the t-statistic looks correct to me, while the p-value should be about 0.2.
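As a rough cross-check outside ClickHouse (a sketch, assuming the degrees of freedom here, roughly 1e6 and 1e8, are large enough that Student's t is well approximated by a standard normal distribution), the two-sided p-values for the two reported t-statistics can be recomputed in plain Python:

```python
import math

def two_sided_p(t):
    # Normal approximation to Student's t: p = P(|Z| >= |t|) = erfc(|t| / sqrt(2)).
    # Accurate here because the degrees of freedom are huge.
    return math.erfc(abs(t) / math.sqrt(2.0))

p_500k = two_sided_p(-0.12708812085024285)  # t from the 500k-row example
p_50m = two_sided_p(-1.2708804531666449)    # t from the 50M-row example
print(round(p_500k, 3), round(p_50m, 3))    # ~0.899 and ~0.204, neither is 1
```

This matches the expected values of roughly 0.9 and 0.2 mentioned above, supporting the claim that the returned p-value of 1 is wrong.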
Having the option to do a t-test directly in ClickHouse is awesome, and it would be great if this were tackled, as the bug currently keeps us from using it.
My ClickHouse version: 22.3.3.44.
Unfortunately, I don't have the setup to check more recent versions.
Thank you! | https://github.com/ClickHouse/ClickHouse/issues/36714 | https://github.com/ClickHouse/ClickHouse/pull/36953 | dcad1541052f0e52d97b7f4d91620a93b5f8fe23 | 85a1204e959885bbed869477f8c4893914972661 | "2022-04-27T14:57:48Z" | c++ | "2022-06-07T13:39:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,687 | ["src/Storages/MergeTree/IMergeTreeDataPart.cpp", "src/Storages/MergeTree/IMergeTreeDataPart.h", "src/Storages/MergeTree/MergeTreeBlockReadUtils.cpp", "src/Storages/MergeTree/MergeTreeBlockReadUtils.h", "src/Storages/MergeTree/MergeTreeReadPool.cpp", "src/Storages/MergeTree/MergeTreeSelectProcessor.cpp", "src/Storages/MergeTree/MergeTreeSequentialSource.cpp", "tests/queries/0_stateless/02286_vertical_merges_missed_column.reference", "tests/queries/0_stateless/02286_vertical_merges_missed_column.sql"] | 22.4 Vertical merges of wide parts are broken | Vertical merges of wide parts fail if there is an array in the table and not all columns are present in parts.
```sql
CREATE TABLE aaa
(
a Array(Int16),
b Int8
)
ENGINE = MergeTree
ORDER BY tuple()
settings
vertical_merge_algorithm_min_columns_to_activate=1,
vertical_merge_algorithm_min_rows_to_activate=1,
min_bytes_for_wide_part=0;
insert into aaa select [], 0;
alter table aaa clear column b;
optimize table aaa final;
Received exception from server (version 22.4.2):
Code: 16. DB::Exception: Received from localhost:9000.
DB::Exception: There is no column a.size0 in table. (NO_SUCH_COLUMN_IN_TABLE)
``` | https://github.com/ClickHouse/ClickHouse/issues/36687 | https://github.com/ClickHouse/ClickHouse/pull/36707 | 73b451160d02e372a012f42fcae0b2a24cf850e9 | 9fb1d92ff54518f6d6d16b8f3c6c0bf1ed9c0b0a | "2022-04-26T18:50:55Z" | c++ | "2022-04-29T07:09:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,648 | ["programs/server/play.html"] | Play UI: if there is exactly only one record in the resultset and the number of columns is large, display it in vertical format. | **Use case**
I want it. | https://github.com/ClickHouse/ClickHouse/issues/36648 | https://github.com/ClickHouse/ClickHouse/pull/36811 | 9621d443459f0549bf4cb9ea181d381d36034728 | 474a805ea7586ee562f701919965c434101d713a | "2022-04-25T22:25:57Z" | c++ | "2022-05-02T00:44:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,618 | ["src/Parsers/ParserCreateQuery.h", "tests/queries/0_stateless/02287_ephemeral_format_crash.reference", "tests/queries/0_stateless/02287_ephemeral_format_crash.sql"] | Incorrect syntax of EPHEMERAL column leads to clickhouse-client crash | ```
-- correct syntax:
CREATE TABLE test_aborted1 (a UInt8, b String EPHEMERAL) Engine=Memory();
-- ok
-- incorrect syntax:
CREATE TABLE test_aborted1 (a UInt8, b EPHEMERAL String) Engine=Memory();
-- in clickhouse-client: Aborted
(lldb) bt
* thread #1, name = 'clickhouse-clie', stop reason = signal SIGSEGV: invalid address (fault address: 0x80)
* frame #0: 0x0000000014961c59 clickhouse-client`DB::ASTColumnDeclaration::formatImpl(DB::IAST::FormatSettings const&, DB::IAST::FormatState&, DB::IAST::FormatStateStacked) const [inlined] DB::Field::isNull(this=0x0000000000000050) const at Field.h:421:40
frame #1: 0x0000000014961c59 clickhouse-client`DB::ASTColumnDeclaration::formatImpl(this=0x00007f67c252aa58, settings=0x00007fff8b1187f8, state=0x00007fff8b1187e0, frame=FormatStateStacked @ 0x00000000072c3190) const at ASTColumnDeclaration.cpp:76:94
frame #2: 0x0000000014974efb clickhouse-client`DB::ASTExpressionList::formatImplMultiline(this=0x00007fff8b1185b8, settings=0x00007fff8b1187f8, state=0x00007fff8b1187e0, frame=FormatStateStacked @ 0x000000000b8baa80) const at ASTExpressionList.cpp:57:16
frame #3: 0x000000001496b788 clickhouse-client`DB::ASTColumns::formatImpl(this=0x00007f67c401efa8, s=0x00007fff8b1187f8, state=0x00007fff8b1187e0, frame=FormatStateStacked @ 0x000000000eb7f520) const at ASTCreateQuery.cpp:185:18
frame #4: 0x000000001496d6b2 clickhouse-client`DB::ASTCreateQuery::formatQueryImpl(this=0x00007f67c24fc598, settings=0x00007fff8b1187f8, state=0x00007fff8b1187e0, frame=FormatStateStacked @ 0x00000000090c7600) const at ASTCreateQuery.cpp:380:23
frame #5: 0x0000000014989025 clickhouse-client`DB::ASTQueryWithOutput::formatImpl(this=0x00007f67c24fc598, s=0x00007fff8b1187f8, state=0x00007fff8b1187e0, frame=FormatStateStacked @ 0x00000000096c7400) const at ASTQueryWithOutput.cpp:27:5
frame #6: 0x0000000014a11076 clickhouse-client`DB::formatAST(DB::IAST const&, DB::WriteBuffer&, bool, bool) [inlined] DB::IAST::format(this=<unavailable>, settings=0x00007fff8b1187f8) const at IAST.h:233:9
frame #7: 0x0000000014a11052 clickhouse-client`DB::formatAST(ast=0x00007f67c24fc598, buf=0x00007fff8b118870, hilite=true, one_line=<unavailable>) at formatAST.cpp:12:9
frame #8: 0x0000000014503f66 clickhouse-client`DB::ClientBase::parseQuery(this=0x00007fff8b11a510, pos=<unavailable>, end=<unavailable>, allow_multi_statements=<unavailable>) const at ClientBase.cpp:316:9
frame #9: 0x00000000145069f8 clickhouse-client`DB::ClientBase::processTextAsSingleQuery(this=0x00007fff8b11a510, full_query="CREATE TABLE test_aborted (\n a UInt64,\n b EPHEMERAL String\n) Engine=MergeTree \nORDER BY tuple();") at ClientBase.cpp:632:25
frame #10: 0x000000001450f2df clickhouse-client`DB::ClientBase::processQueryText(this=0x00007fff8b11a510, text="CREATE TABLE test_aborted (\n a UInt64,\n b EPHEMERAL String\n) Engine=MergeTree \nORDER BY tuple();") at ClientBase.cpp:1791:9
frame #11: 0x000000001451085a clickhouse-client`DB::ClientBase::runInteractive(this=<unavailable>) at ClientBase.cpp:1939:18
frame #12: 0x000000000afbbcd5 clickhouse-client`DB::Client::main(this=0x00007fff8b11a510, (null)=<unavailable>) at Client.cpp:251:9
frame #13: 0x0000000017735f66 clickhouse-client`Poco::Util::Application::run(this=0x00007fff8b11a510) at Application.cpp:334:8
frame #14: 0x000000000afc4f01 clickhouse-client`mainEntryClickHouseClient(argc=1, argv=0x00007f67c40ef038) at Client.cpp:1057:23
frame #15: 0x000000000af06e31 clickhouse-client`main(argc_=<unavailable>, argv_=<unavailable>) at main.cpp:409:12
frame #16: 0x00007f67c4f700b3 libc.so.6`__libc_start_main + 243
frame #17: 0x000000000af066ae clickhouse-client`_start + 46
``` | https://github.com/ClickHouse/ClickHouse/issues/36618 | https://github.com/ClickHouse/ClickHouse/pull/36633 | db9cb4cf092910935101c81c6ad90d6f39dfc1be | 7001489095a3b4412c60f2e3da751f30a988c368 | "2022-04-25T12:11:54Z" | c++ | "2022-04-28T16:38:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,558 | ["src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/02287_type_object_convert.reference", "tests/queries/0_stateless/02287_type_object_convert.sql"] | CAST Object to Object with Nullable subcolumns doesn't work | I expect it should work:
```
:) select CAST(CAST('{"x" : 1}', 'Object(\'json\')'), 'Object(Nullable(\'json\'))')
SELECT CAST(CAST('{"x" : 1}', 'Object(\'json\')'), 'Object(Nullable(\'json\'))')
Query id: fb057b45-b72a-4d22-9b38-a3136cdf2465
0 rows in set. Elapsed: 0.002 sec.
Received exception from server (version 22.5.1):
Code: 53. DB::Exception: Received from localhost:9000. DB::Exception: Cast to Object can be performed only from flatten named Tuple, Map or String. Got: Object('json'): While processing CAST(CAST('{"x" : 1}', 'Object(\'json\')'), 'Object(Nullable(\'json\'))'). (TYPE_MISMATCH)
```
@CurtizJ what do you think? | https://github.com/ClickHouse/ClickHouse/issues/36558 | https://github.com/ClickHouse/ClickHouse/pull/36564 | 77e55c344c7c68288d514b36e3c4eb5fa6afa179 | 9c1a06703a2f4cae38060534511ce5468d86ff87 | "2022-04-22T18:29:37Z" | c++ | "2022-05-04T18:40:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,530 | ["src/Storages/MergeTree/MergeTreeData.cpp", "tests/queries/0_stateless/02233_interpolate_1.reference", "tests/queries/0_stateless/02233_interpolate_1.sql"] | INTERPOLATE doesn't work for MergeTree | **Describe what's wrong**
INTERPOLATE doesn't work for tables with ENGINE = MergeTree
**Does it reproduce on recent release?**
yes
**How to reproduce**
```
CREATE TABLE t1 (n Int32) ENGINE = MergeTree ORDER BY n;
INSERT INTO t1 VALUES (1),(3),(3),(6),(6),(6);
SELECT
n,
count() AS m
FROM t1
GROUP BY n
ORDER BY n ASC WITH FILL
INTERPOLATE ( m AS m + 1 );
Received exception from server (version 22.5.1):
Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Missing columns: 'm' while processing query: 'SELECT n, count() FROM t1 GROUP BY n ORDER BY n ASC WITH FILL INTERPOLATE ( m AS m + 1 )', required columns: 'n' 'm', maybe you meant: ['n','n']. (UNKNOWN_IDENTIFIER)
```
**Expected behavior**
The same result as with `ENGINE = Memory`:
```
CREATE TABLE t1 (n Int32) ENGINE = Memory;
INSERT INTO t1 VALUES (1),(3),(3),(6),(6),(6);
SELECT
n,
count() AS m
FROM t1
GROUP BY n
ORDER BY n ASC WITH FILL
INTERPOLATE ( m AS m + 1 );
┌─n─┬─m─┐
│ 1 │ 1 │
│ 2 │ 2 │
│ 3 │ 2 │
│ 4 │ 3 │
│ 5 │ 4 │
│ 6 │ 3 │
└───┴───┘
```
**Error message and/or stacktrace**
```
Received exception from server (version 22.5.1):
Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Missing columns: 'm' while processing query: 'SELECT n, count() FROM t1 GROUP BY n ORDER BY n ASC WITH FILL INTERPOLATE ( m AS m + 1 )', required columns: 'n' 'm', maybe you meant: ['n','n']. (UNKNOWN_IDENTIFIER)
```
**Additional context**
INTERPOLATE is a new feature added in #35349 | https://github.com/ClickHouse/ClickHouse/issues/36530 | https://github.com/ClickHouse/ClickHouse/pull/36549 | 34c342fdd3164c18db339d07de21c54ef92b0a84 | 0fa63a8d65937b39f3f4db5d133b2090dfd89f12 | "2022-04-22T04:44:28Z" | c++ | "2022-04-25T11:18:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,523 | ["src/DataTypes/DataTypeTuple.cpp", "tests/queries/0_stateless/02286_tuple_numeric_identifier.reference", "tests/queries/0_stateless/02286_tuple_numeric_identifier.sql"] | JSON data type: Explicitly specified names of tuple elements cannot start with digit | **Describe the unexpected behaviour**
An error is received when inserting into a table with a JSON column type:
```
Received exception from server (version 22.3.3):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Explicitly specified names of tuple elements cannot start with digit. (BAD_ARGUMENTS)
```
**How to reproduce**
```sql
set allow_experimental_object_type=true;
CREATE TABLE json_test (data JSON) ENGINE=MergeTree PRIMARY KEY tuple();
INSERT INTO json_test (data) VALUES('{ "12344": true }');
```
result:
```
INSERT INTO json_test (data) FORMAT Values
1 rows in set. Elapsed: 0.002 sec.
Received exception from server (version 22.3.3):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Explicitly specified names of tuple elements cannot start with digit. (BAD_ARGUMENTS)
```
* Which ClickHouse server version to use
22.3.3.44
**Expected behavior**
It must be possible to insert such JSON into the table, since it's valid JSON:
```
select isValidJSON('{ "12344": true }');
ββisValidJSON('{ "12344": true }')ββ
β 1 β
ββββββββββββββββββββββββββββββββββββ
```
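The mismatch here is between JSON's key grammar (any string is a valid key) and identifier-style rules being applied to tuple element names. A quick plain-Python illustration (not ClickHouse code):

```python
import json

doc = json.loads('{ "12344": true }')  # valid JSON, as isValidJSON confirms
key = next(iter(doc))
print(key, key.isidentifier())
# 12344 False -- a JSON key may start with a digit, an identifier may not
```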
| https://github.com/ClickHouse/ClickHouse/issues/36523 | https://github.com/ClickHouse/ClickHouse/pull/36544 | bd1f12e5d50595728e3c6078f3ed73eaac696dfa | 17bb7f175b8f5b3ed7ff90ed552c40a85133e696 | "2022-04-21T23:14:07Z" | c++ | "2022-04-26T10:22:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,474 | ["src/Interpreters/SessionLog.cpp"] | session_log.interface is missing enum values | ```
select distinct interface from system.session_log
Code: 36. DB::Exception: Unexpected value 6 in enum: While executing ParallelFormattingOutputFormat. (BAD_ARGUMENTS) (version 22.3.3.44 (official build))
```
Current interfaces in table definitions:
```
`interface` Enum8('TCP' = 1, 'HTTP' = 2, 'gRPC' = 3, 'MySQL' = 4, 'PostgreSQL' = 5),
```
There are two more in ClickHouse already:
Interface::LOCAL
Interface::TCP_INTERSERVER
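A sketch of the widened mapping the table definition needs. Note the numeric codes 6 and 7 below are assumptions inferred from the "Unexpected value 6" error, not taken from the source; verify them against `ClientInfo::Interface`:

```python
# Codes 1-5 come from the Enum8 definition above; 6 and 7 are assumed here.
INTERFACE = {
    1: "TCP", 2: "HTTP", 3: "gRPC", 4: "MySQL", 5: "PostgreSQL",
    6: "LOCAL", 7: "TCP_INTERSERVER",
}

def interface_name(value: int) -> str:
    try:
        return INTERFACE[value]
    except KeyError:
        # mirrors the error the incomplete enum produces today
        raise ValueError(f"Unexpected value {value} in enum") from None

print(interface_name(6))  # LOCAL (under the assumed code above)
```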
cc: @Enmk | https://github.com/ClickHouse/ClickHouse/issues/36474 | https://github.com/ClickHouse/ClickHouse/pull/36480 | 064a4a9a6996823ef520187b860d572fbd34e35c | 9713441650fc0781726d25d258a6f550f4b47ca3 | "2022-04-21T06:45:29Z" | c++ | "2022-04-21T19:14:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,451 | ["src/Dictionaries/getDictionaryConfigurationFromAST.cpp"] | ClickHouse does not start if dns entry is invalid for external ClickHouse dictionary | **Describe the unexpected behavior**
Create a dictionary of source type ClickHouse using a remote instance. Remove the remote instance and its associated DNS address. If we restart the local instance with the dictionary, it will refuse to start with the following exception:
```
0. Poco::Net::HostNotFoundException::HostNotFoundException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x186517ac in /usr/bin/clickhouse
1. Poco::Net::DNS::aierror(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x1862778f in /usr/bin/clickhouse
2. Poco::Net::SocketAddress::init(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned short) @ 0x1865d079 in /usr/bin/clickhouse
```
**How to reproduce**
Reproduced on version `22.3.1` and `22.4.1.2116`
1. Ensure you have 2 instances available, accessible over a DNS entry (using the host file if necessary)
2. Designate one instance `local` and one `remote`
3. Create a table with the dictionary data on the remote instance. Example below with public data.
```
CREATE TABLE zips (
zip LowCardinality(String),
city LowCardinality(String)
) ENGINE = MergeTree() ORDER BY (zip, city);
INSERT INTO zips SELECT zip, city FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/zips/zips.tsv', 'TSV', 'zip String, city String');
```
4. Create a dictionary on the local instance.
```
CREATE DICTIONARY zip_codes
(
zip String,
city String
)
PRIMARY KEY zip
SOURCE (ClickHouse(HOST 'remote_instance' PORT 9200 USER 'default' PASSWORD 'password' DB 'default' TABLE 'zips' SECURE 0 QUERY 'SELECT zip, city FROM default.zips'))
LAYOUT(FLAT())
LIFETIME(MIN 0 MAX 1000);
```
5. Shutdown the remote instance and remove the DNS entry e.g. delete from hosts file.
6. Restart the local instance. It will fail to restart with the following DNS error.
**Expected behavior**
ClickHouse should start. ClickHouse does start if the DNS entry is valid but the remote instance is not started.
**Error message and/or stacktrace**
Example
```
3. DB::getInfoIfClickHouseDictionarySource(Poco::AutoPtr<Poco::Util::AbstractConfiguration>&, std::__1::shared_ptr<DB::Context const>) @ 0x14012de6 in /usr/bin/clickhouse
4. DB::DDLDependencyVisitor::visit(DB::ASTFunctionWithKeyValueArguments const&, DB::DDLDependencyVisitor::Data&) @ 0x14469b55 in /usr/bin/clickhouse
7. DB::InDepthNodeVisitor<DB::DDLDependencyVisitor, true, false, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) @ 0x1446947a in /usr/bin/clickhouse
8. DB::InDepthNodeVisitor<DB::DDLDependencyVisitor, true, false, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) @ 0x144694ce in /usr/bin/clickhouse
9. DB::InDepthNodeVisitor<DB::DDLDependencyVisitor, true, false, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) @ 0x144694ce in /usr/bin/clickhouse
10. DB::getDependenciesSetFromCreateQuery(std::__1::shared_ptr<DB::Context const>, DB::QualifiedTableName const&, std::__1::shared_ptr<DB::IAST> const&) @ 0x14468eec in /usr/bin/clickhouse
11. ? @ 0x145129cd in /usr/bin/clickhouse
12. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xaf6546a in /usr/bin/clickhouse
13. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xaf674a4 in /usr/bin/clickhouse
14. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xaf62837 in /usr/bin/clickhouse
15. ? @ 0xaf662fd in /usr/bin/clickhouse
16. ? @ 0x7f9f10f52609 in ?
17. clone @ 0x7f9f10e77163 in ?
(version 22.2.2.1)
2022.04.20 12:17:08.789285 [ 360788 ] {} <Error> Application: Host not found: single-node-clickhouse-blue-1.localdomain
``` | https://github.com/ClickHouse/ClickHouse/issues/36451 | https://github.com/ClickHouse/ClickHouse/pull/36463 | 5b5dd4fa4673d0137e56c8bbe3af28444866cd54 | 7f50bebba114abfb947a92995b588f2757a7e487 | "2022-04-20T11:54:14Z" | c++ | "2022-04-22T12:28:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,448 | ["src/Interpreters/TreeOptimizer.cpp", "tests/queries/0_stateless/02285_executable_user_defined_function_group_by.reference", "tests/queries/0_stateless/02285_executable_user_defined_function_group_by.sql"] | Executable User Defined Functions used by aggregate function error "Unknown function" | Always reproduced (ClickHouse server version 22.3.2.1):
UDF code (Python 3.9.6):
```python
#!/usr/bin/python
import sys
import time

if __name__ == '__main__':
    for line in sys.stdin:
        print("Python4 " + line, end='')
        sys.stdout.flush()
```
```
SELECT test_function_python('clickhouse')

Query id: 57da3834-1238-4da6-a984-48d4c1bb4a04

ββtest_function_python('clickhouse')ββ
β Python4 clickhouse                  β
ββββββββββββββββββββββββββββββββββββββ

1 rows in set. Elapsed: 0.001 sec.

SELECT test_function_python('clickhouse') AS uuu
GROUP BY uuu

Query id: c14131e5-e7aa-4585-a9f5-8e4b3e29fe44

0 rows in set. Elapsed: 0.002 sec.

Received exception from server (version 22.3.2):
Code: 46. DB::Exception: Received from 10.254.134.108:9000. DB::Exception: Unknown function test_function_python. (UNKNOWN_FUNCTION)

SELECT uuu
FROM
(
    SELECT test_function_python('clickhouse') AS uuu
) AS tmp
GROUP BY uuu

Query id: 20ca737b-35e1-418a-ab8e-2a296ac44e28

ββuuuβββββββββββββββββ
β Python4 clickhouse β
ββββββββββββββββββββββ
```
1 rows in set. Elapsed: 0.002 sec. | https://github.com/ClickHouse/ClickHouse/issues/36448 | https://github.com/ClickHouse/ClickHouse/pull/36486 | 8af3159916eb67c9f2a1356d4cc5f644b1fe1f6d | fe004c3486b9275939a97ea29f7a0000e74f8427 | "2022-04-20T11:49:13Z" | c++ | "2022-04-24T03:11:57Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,377 | ["src/Storages/StorageReplicatedMergeTree.cpp", "tests/queries/0_stateless/02020_alter_table_modify_comment.reference", "tests/queries/0_stateless/02302_ReplicatedMergeTree_comment.reference", "tests/queries/0_stateless/02302_ReplicatedMergeTree_comment.sql"] | ALTER TABLE MODIFY COMMENT does not work on ReplicatedMergeTree | We use ClickHouse version 22.3.3.44 and have run into some wrong behaviour:
"ALTER TABLE ... MODIFY COMMENT ..." queries do not work on the ReplicatedMergeTree engine.
Script to reproduce the bug:
```
CREATE TABLE sandbox.test__calendar
(
`dttm` DateTime COMMENT 'date and time',
`timestamp` Int64 COMMENT 'event timestamp',
`year_num` Int32 COMMENT 'year number'
)
ENGINE = ReplicatedMergeTree() PARTITION BY year_num ORDER BY timestamp
COMMENT 'Comment text for test table';
SELECT comment FROM system.tables WHERE name like 'test__calendar'; -- 1 row selected with correct comment text
ALTER TABLE sandbox.test__calendar MODIFY COMMENT 'Some new more detailed text of comment';
SELECT comment FROM system.tables WHERE name like 'test__calendar'; -- 1 row selected, but with OLD(!) comment text
```
Maybe it will be helpful: the ALTER TABLE query returns this result:
```
host |status|error|num_hosts_remaining|num_hosts_active|
----------------+------+----+-----------------------+------------------+
10.7.179.41 | 0| | 1| 0|
ak1-st-data-01 | 0| | 0| 0|
``` | https://github.com/ClickHouse/ClickHouse/issues/36377 | https://github.com/ClickHouse/ClickHouse/pull/37416 | 5f10c6e8893aadf40b1b30c500d6c89cd3b21b49 | f63fa9bcc680fa4ac412c67ae484a0af84be5063 | "2022-04-18T07:48:28Z" | c++ | "2022-05-27T16:29:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,367 | ["src/Disks/IO/ThreadPoolReader.cpp", "src/Disks/IO/ThreadPoolRemoteFSReader.cpp", "src/IO/WriteBufferFromS3.cpp", "src/Interpreters/ThreadStatusExt.cpp", "src/Processors/Transforms/buildPushingToViewsChain.cpp"] | When I use Kafka table: there might be a memory leak after upgrading to version 22.3.3 | We use clickhosue version 21.12.2.17 before.It works well
But we found that after upgrade to 22.3.3, the memory of clickhouse used grow up daily .
I don't know if this is a memory overflow
here is the promethues metric 'ClickHouseMetrics_MemoryTracking' duration last 15days

and I restarted cknode1 at 4.14 ,you can see it come down and rising again.
And here is some sql and result I have run for diagnosisγ




We use dictionaries and materialized views; no live views.
We have tried `SYSTEM DROP MARK CACHE` and changing `max_bytes_before_external_group_by` to '6g'. Neither worked.
Is there anything else I can do?
| https://github.com/ClickHouse/ClickHouse/issues/36367 | https://github.com/ClickHouse/ClickHouse/pull/40732 | eb87e3df16592fe42d501b99a6254957626c4165 | 88141cae98d7dad12720423c9ddbb98551d089f7 | "2022-04-18T02:27:52Z" | c++ | "2022-08-29T17:36:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,354 | ["src/TableFunctions/ITableFunctionFileLike.cpp", "src/TableFunctions/ITableFunctionFileLike.h", "src/TableFunctions/TableFunctionFile.cpp", "src/TableFunctions/TableFunctionFile.h", "tests/queries/0_stateless/02286_use_file_descriptor_in_table_function_file.reference", "tests/queries/0_stateless/02286_use_file_descriptor_in_table_function_file.sh"] | Allow file descriptors in table function `file` if it is run in `clickhouse-local` | **Use case**
```
clickhouse-local --query "SELECT * FROM file(0, JSONEachRow)" < something
```
**Additional context**
I also tried other intuitive options:
```
clickhouse-local --query "SELECT * FROM file(stdin, JSONEachRow)" < something
clickhouse-local --query "SELECT * FROM file('-', JSONEachRow)" < something
```
Although this works:
```
clickhouse-local --query "SELECT * FROM file('/dev/stdin', JSONEachRow)"
```
I think the solution with numeric file descriptors should be supported, as we already support it for the `File` table engine,
and it is also useful if a script opens file descriptors with higher numbers, like
```
clickhouse-local --query "SELECT * FROM file(5, JSONEachRow)" 5<test
```
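For context, these are the plain descriptor semantics being requested: an inherited numeric fd is just an integer the process can read from. A small Python illustration using a pipe (not ClickHouse code):

```python
import os

r, w = os.pipe()                 # r and w are plain numeric file descriptors
os.write(w, b'{"x" : 1}\n')
os.close(w)
with os.fdopen(r) as f:          # same idea as file(5, JSONEachRow) reading fd 5
    print(f.read(), end="")      # {"x" : 1}
```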
Note: it is already possible to process stdin in clickhouse-local and it is also intuitive and convenient:
```
clickhouse-local --query "SELECT * FROM table" < something
```
I just want other ways to work consistently. | https://github.com/ClickHouse/ClickHouse/issues/36354 | https://github.com/ClickHouse/ClickHouse/pull/36562 | aaf74914b04a7755c082d12867e7830b65da7bf4 | fd980e6840530db64d96749ff806d27e3a9d47fa | "2022-04-17T16:17:49Z" | c++ | "2022-05-02T11:25:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,340 | ["programs/main.cpp"] | Hostile environments can poison ClickHouse with LD_PRELOAD | **Use case**
https://colab.research.google.com/drive/1wzYn59PA9EDyra6356a8rUpwd3zUX0Zt?usp=sharing
**Describe the solution you'd like**
There are multiple ways to prevent LD_PRELOAD:
- link statically with musl to prevent any chance for LD_PRELOAD working;
- add some flag to the binary like capabilities or setuid to also prevent LD_PRELOAD working;
- don't allow it with some env variable;
- implement our own dynamic loader.
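Several of these options reduce to noticing the variable before any untrusted code can act. A minimal sketch of scrubbing it at startup (illustrative Python, not the actual C++ change; `/proc/self/exe` is Linux-specific):

```python
import os
import sys

def scrubbed_env(environ):
    """A copy of the environment without the dangerous variable."""
    return {k: v for k, v in environ.items() if k != "LD_PRELOAD"}

def guard_against_preload(environ=None, argv=None):
    """If LD_PRELOAD is set, re-exec the same binary without it.

    Returns False when there is nothing to do; otherwise execve never returns."""
    environ = os.environ if environ is None else environ
    if "LD_PRELOAD" not in environ:
        return False
    os.execve("/proc/self/exe", argv or sys.argv, scrubbed_env(environ))
```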
Also possible workaround:
- check if LD_PRELOAD is set very early at program startup. If it is, reset it and exec ourself with the same argc, argv. | https://github.com/ClickHouse/ClickHouse/issues/36340 | https://github.com/ClickHouse/ClickHouse/pull/36342 | eab80dbfdcc3ab9a17f2ed8039bab9499c5ee106 | 956b4b5361f791b12a88aabd808bf5225abed5fb | "2022-04-16T20:59:17Z" | c++ | "2022-04-18T04:01:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,336 | ["src/Client/ClientBase.cpp", "src/Core/Settings.h", "src/Parsers/ParserInsertQuery.cpp", "src/Server/GRPCServer.cpp", "tests/queries/0_stateless/02267_insert_empty_data.reference", "tests/queries/0_stateless/02267_insert_empty_data.sql"] | A setting `throw_if_no_data_to_insert` | **Use case**
ClickHouse already throws an exception if the data for an INSERT is empty:
```
Code: 108. DB::Exception: No data to insert
```
Let's introduce a setting, enabled by default, to control this behavior.
The user will be able to disable it to allow empty INSERTs.
This is useful in some batch scripts. | https://github.com/ClickHouse/ClickHouse/issues/36336 | https://github.com/ClickHouse/ClickHouse/pull/36345 | f6ab2bd523e6965aaba2e3e7e1f4e083d5db4279 | 1333b4cd8940965f89bc1a2e25184834144e8abe | "2022-04-16T17:53:22Z" | c++ | "2022-04-18T04:04:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,335 | ["src/Core/Settings.h", "src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeData.h", "src/Storages/MergeTree/MergeTreeSink.cpp", "src/Storages/MergeTree/ReplicatedMergeTreeSink.cpp", "tests/queries/0_stateless/02280_add_query_level_settings.reference", "tests/queries/0_stateless/02280_add_query_level_settings.sql"] | `parts_to_delay_insert` and `parts_to_throw_insert` can be also query-level settings | If they are defined, they can override table-level settings. | https://github.com/ClickHouse/ClickHouse/issues/36335 | https://github.com/ClickHouse/ClickHouse/pull/36371 | 641a5f5e350074777ea04854c0202a98abda06d4 | af980ef4efca068254c9abe18be64b6895f6f052 | "2022-04-16T16:53:11Z" | c++ | "2022-04-28T09:08:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,334 | ["src/TableFunctions/TableFunctionNull.cpp", "src/TableFunctions/TableFunctionNull.h", "tests/queries/0_stateless/02267_type_inference_for_insert_into_function_null.reference", "tests/queries/0_stateless/02267_type_inference_for_insert_into_function_null.sql"] | Type inference for INSERT INTO FUNCTION null() | **Use case**
`INSERT INTO FUNCTION null() SELECT * FROM ...`
```
Table function 'null' requires 'structure'.
``` | https://github.com/ClickHouse/ClickHouse/issues/36334 | https://github.com/ClickHouse/ClickHouse/pull/36353 | 0af183826d4f103a75995956f194232bc3ade425 | f5e270b2f858cbaeea6041819ff66c22ec3d1b60 | "2022-04-16T14:57:03Z" | c++ | "2022-04-18T04:06:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,313 | ["programs/server/Server.cpp", "src/Common/getNumberOfPhysicalCPUCores.cpp"] | The server has 96 logical cores (AMD EPYC 7R32) but ClickHouse selected 36 threads. | I don't understand why. | https://github.com/ClickHouse/ClickHouse/issues/36313 | https://github.com/ClickHouse/ClickHouse/pull/36310 | 956b4b5361f791b12a88aabd808bf5225abed5fb | 48fd09c7db7a8ee0b9a65d8dda660be8a0643c95 | "2022-04-15T23:57:02Z" | c++ | "2022-04-18T04:01:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,307 | ["docs/en/sql-reference/data-types/enum.md", "docs/ru/sql-reference/data-types/enum.md", "src/DataTypes/DataTypeEnum.cpp", "tests/queries/0_stateless/00757_enum_defaults.reference", "tests/queries/0_stateless/00757_enum_defaults.sql"] | Possible range issues in automatic assigned enums, also fix error message. | **Describe the unexpected behaviour**
Currently enum is numbered straight through using UInt64 starting from 1, while underlying type either Int8 or Int16 - such inconsistency can lead to some undesirable side effects or run-time errors
**How to reproduce**
current trunk, feature just added in #36101
```
ClickHouse-ubuntu :) CREATE TABLE t (x Enum8(
'00','01','02','03','04','05','06','07','08','09','0A','0B','0C','0D','0E','0F',
'10','11','12','13','14','15','16','17','18','19','1A','1B','1C','1D','1E','1F',
'20','21','22','23','24','25','26','27','28','29','2A','2B','2C','2D','2E','2F',
'30','31','32','33','34','35','36','37','38','39','3A','3B','3C','3D','3E','3F',
'40','41','42','43','44','45','46','47','48','49','4A','4B','4C','4D','4E','4F',
'50','51','52','53','54','55','56','57','58','59','5A','5B','5C','5D','5E','5F',
'60','61','62','63','64','65','66','67','68','69','6A','6B','6C','6D','6E','6F',
'70','71','72','73','74','75','76','77','78','79','7A','7B','7C','7D','7E','7F'
)) ENGINE=MergeTree() order by x;
CREATE TABLE t
(
`x` Enum8('00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '0A', '0B', '0C', '0D', '0E', '0F', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '1A', '1B', '1C', '1D', '1E', '1F', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '2A', '2B', '2C', '2D', '2E', '2F', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '3A', '3B', '3C', '3D', '3E', '3F', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '4A', '4B', '4C', '4D', '4E', '4F', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '5A', '5B', '5C', '5D', '5E', '5F', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '6A', '6B', '6C', '6D', '6E', '6F', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '7A', '7B', '7C', '7D', '7E', '7F')
)
ENGINE = MergeTree
ORDER BY x
Query id: e5d79db3-5f8f-4030-b6ed-f85238461a73
0 rows in set. Elapsed: 0.006 sec.
Received exception from server (version 22.4.1):
Code: 69. DB::Exception: Received from localhost:9000. DB::Exception: Value 128 for element '7F' exceeds range of Enum8. (ARGUMENT_OUT_OF_BOUND)
```
**Expected behavior**
Not clear - should be developed in this issue.
**Additional context**
issue #36101
| https://github.com/ClickHouse/ClickHouse/issues/36307 | https://github.com/ClickHouse/ClickHouse/pull/36352 | 894b1b163e982c6929ab451467f6e253e7e3648b | f6a7b6c2a122742ae060762323b431d4aa00b5d6 | "2022-04-15T20:35:30Z" | c++ | "2022-04-20T03:34:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,304 | [".github/workflows/tags_stable.yml", "docker/server/Dockerfile.alpine", "docker/server/Dockerfile.ubuntu", "utils/list-versions/update-docker-version.sh"] | Dockerfile in the repository is not updated on releases. | See `ARG VERSION=22.1.1.*`. | https://github.com/ClickHouse/ClickHouse/issues/36304 | https://github.com/ClickHouse/ClickHouse/pull/41256 | 636994fab8df402fad4cadc0e0b28f8a0f892413 | f47a44ef021b76f142065705a8c5fea367d32122 | "2022-04-15T18:36:05Z" | c++ | "2022-09-14T10:27:36Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,303 | ["src/Processors/Formats/Impl/JSONEachRowRowInputFormat.cpp", "src/Processors/Formats/Impl/JSONEachRowRowOutputFormat.cpp", "tests/queries/0_stateless/02187_async_inserts_all_formats.reference", "tests/queries/0_stateless/02267_jsonlines_ndjson_format.reference", "tests/queries/0_stateless/02267_jsonlines_ndjson_format.sql"] | Add aliases `JSONLines`, `NDJSON` | To `JSONEachRow` | https://github.com/ClickHouse/ClickHouse/issues/36303 | https://github.com/ClickHouse/ClickHouse/pull/36327 | 143b67a79049120520142cb389ea5e6e61a6c176 | e8575f5f357b282efb93b96e6a0c2c11eda02998 | "2022-04-15T18:28:35Z" | c++ | "2022-04-22T04:08:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,261 | ["src/Dictionaries/DictionaryStructure.h", "tests/queries/0_stateless/02811_ip_dict_attribute.reference", "tests/queries/0_stateless/02811_ip_dict_attribute.sql"] | support IPv6 as a dictionary attribute | ```sql
create table src ( id UInt64, ip6 IPv6) Engine=Memory;
insert into src values(1, 'fe80::9801:43ff:fe1f:7690');
CREATE DICTIONARY dict
(
id UInt64,
ip6 IPv6
)
PRIMARY KEY id
LAYOUT(HASHED())
SOURCE (CLICKHOUSE ( table src))
lifetime ( 10);
(version 22.4.1):
DB::Exception: Unknown type IPv6 for dictionary attribute. (UNKNOWN_TYPE)
```
WA: String
```sql
CREATE DICTIONARY dict
(
id UInt64,
ip6 String
)
PRIMARY KEY id
LAYOUT(HASHED())
SOURCE (CLICKHOUSE ( table src))
lifetime ( 10);
SELECT dictGet('dict', 'ip6', toUInt64(1))
ββdictGet('dict', 'ip6', toUInt64(1))ββ
β fe80::9801:43ff:fe1f:7690 β
βββββββββββββββββββββββββββββββββββββββ
``` | https://github.com/ClickHouse/ClickHouse/issues/36261 | https://github.com/ClickHouse/ClickHouse/pull/51756 | f7d89309064f4c18aedfd7abcf3ee077c308dd76 | 9448d42aea6c5befae09cc923570fd6575a6d6f8 | "2022-04-14T14:16:12Z" | c++ | "2023-07-27T19:59:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,123 | ["src/Processors/Transforms/PostgreSQLSource.cpp"] | Segmentation fault after external dictionary loading with PostgreSQL12.5 | ClickHouse: 22.3.3.44
CentOS Linux release 7.8.2003 (Core)
PostgreSQL: 12.5
ClickHouse crashes while creating an external dictionary with PostgreSQL 12.5 as a source. The client returns an "Unexpected EOF while reading bytes" message after the query finishes.
```
CREATE DICTIONARY xxx_dict (
id String,
x1 String,
x2 String,
x3_time Datetime,
x4_by String,
x5_time Datetime,
x7_time Datetime,
x8 String,
x9 Array(String)
)PRIMARY KEY id
SOURCE(POSTGRESQL(
port 5432
host 'x.x.x.x'
user 'xxx'
password 'xxx'
db 'xxx'
table 'xx_table'
invalidate_query 'SELECT x3_time,x5_time FROM xx_table order by x3_time desc,x5_time desc limit 1'))
LAYOUT(COMPLEX_KEY_HASHED())
LIFETIME(MIN 100 MAX 120);
```
Crash logs from /var/log/clickhouse-server/clickhouse-server.err.log
```
2022.04.11 08:55:05.679971 [ 23405 ] {} <Fatal> BaseDaemon: ########################################
2022.04.11 08:55:05.680043 [ 23405 ] {} <Fatal> BaseDaemon: (version 22.3.3.44 (official build), build id: F9D3C2B8666BEF5D) (from thread 5319) (no query) Received signal Segmentation fault (11)
2022.04.11 08:55:05.680095 [ 23405 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
2022.04.11 08:55:05.680205 [ 23405 ] {} <Fatal> BaseDaemon: Stack trace: 0xb34e02b 0x168fc2b5 0x168fbe7a 0x16b53782 0x1691c603 0x1691005e 0x1690f880 0x1692112e 0x1692148c 0x12916eab 0x12965bd7 0x12965645 0x1297b4e8 0x157d6240 0x157d5b19 0x157d7b97 0xb418757 0xb41c2dd 0x7fb704847ea5 0x7fb7045708dd
2022.04.11 08:55:05.680297 [ 23405 ] {} <Fatal> BaseDaemon: 2. DB::PostgreSQLSource<pqxx::transaction<(pqxx::isolation_level)0, (pqxx::write_policy)0> >::generate() @ 0xb34e02b in /usr/bin/clickhouse
2022.04.11 08:55:05.680334 [ 23405 ] {} <Fatal> BaseDaemon: 3. DB::ISource::tryGenerate() @ 0x168fc2b5 in /usr/bin/clickhouse
2022.04.11 08:55:05.680357 [ 23405 ] {} <Fatal> BaseDaemon: 4. DB::ISource::work() @ 0x168fbe7a in /usr/bin/clickhouse
2022.04.11 08:55:05.680381 [ 23405 ] {} <Fatal> BaseDaemon: 5. DB::SourceWithProgress::work() @ 0x16b53782 in /usr/bin/clickhouse
2022.04.11 08:55:05.680404 [ 23405 ] {} <Fatal> BaseDaemon: 6. DB::ExecutionThreadContext::executeTask() @ 0x1691c603 in /usr/bin/clickhouse
2022.04.11 08:55:05.680445 [ 23405 ] {} <Fatal> BaseDaemon: 7. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x1691005e in /usr/bin/clickhouse
2022.04.11 08:55:05.680472 [ 23405 ] {} <Fatal> BaseDaemon: 8. DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) @ 0x1690f880 in /usr/bin/clickhouse
2022.04.11 08:55:05.680497 [ 23405 ] {} <Fatal> BaseDaemon: 9. DB::PullingPipelineExecutor::pull(DB::Chunk&) @ 0x1692112e in /usr/bin/clickhouse
2022.04.11 08:55:05.680520 [ 23405 ] {} <Fatal> BaseDaemon: 10. DB::PullingPipelineExecutor::pull(DB::Block&) @ 0x1692148c in /usr/bin/clickhouse
2022.04.11 08:55:05.680574 [ 23405 ] {} <Fatal> BaseDaemon: 11. DB::readInvalidateQuery(DB::QueryPipeline) @ 0x12916eab in /usr/bin/clickhouse
2022.04.11 08:55:05.680608 [ 23405 ] {} <Fatal> BaseDaemon: 12. DB::PostgreSQLDictionarySource::doInvalidateQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0x12965bd7 in /usr/bin/clickhouse
2022.04.11 08:55:05.680631 [ 23405 ] {} <Fatal> BaseDaemon: 13. DB::PostgreSQLDictionarySource::isModified() const @ 0x12965645 in /usr/bin/clickhouse
2022.04.11 08:55:05.680670 [ 23405 ] {} <Fatal> BaseDaemon: 14. DB::IDictionary::isModified() const @ 0x1297b4e8 in /usr/bin/clickhouse
2022.04.11 08:55:05.680696 [ 23405 ] {} <Fatal> BaseDaemon: 15. DB::ExternalLoader::LoadingDispatcher::reloadOutdated() @ 0x157d6240 in /usr/bin/clickhouse
2022.04.11 08:55:05.680717 [ 23405 ] {} <Fatal> BaseDaemon: 16. DB::ExternalLoader::PeriodicUpdater::doPeriodicUpdates() @ 0x157d5b19 in /usr/bin/clickhouse
2022.04.11 08:55:05.680761 [ 23405 ] {} <Fatal> BaseDaemon: 17. ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::ExternalLoader::PeriodicUpdater::*)(), DB::ExternalLoader::PeriodicUpdater*>(void (DB::ExternalLoader::PeriodicUpdater::*&&)(), DB::ExternalLoader::PeriodicUpdater*&&)::'lambda'()::operator()() @ 0x157d7b97 in /usr/bin/clickhouse
2022.04.11 08:55:05.680798 [ 23405 ] {} <Fatal> BaseDaemon: 18. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb418757 in /usr/bin/clickhouse
2022.04.11 08:55:05.680822 [ 23405 ] {} <Fatal> BaseDaemon: 19. ? @ 0xb41c2dd in /usr/bin/clickhouse
2022.04.11 08:55:05.680851 [ 23405 ] {} <Fatal> BaseDaemon: 20. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
2022.04.11 08:55:05.680881 [ 23405 ] {} <Fatal> BaseDaemon: 21. clone @ 0xfe8dd in /usr/lib64/libc-2.17.so
2022.04.11 08:55:06.066945 [ 23405 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: FD20C2FC24F8B8996C15BF97FA841B03. There is no information about the reference checksum.
```
PostgreSQL logs
```
2022-04-11 16:59:00.916 CST [421] LOG: unexpected EOF on client connection with an open transaction
```
I have looked at these issues but found no solution. Please help, thank you.
https://github.com/ClickHouse/ClickHouse/issues/32991
https://github.com/ClickHouse/ClickHouse/issues/36030
| https://github.com/ClickHouse/ClickHouse/issues/36123 | https://github.com/ClickHouse/ClickHouse/pull/38190 | 6f148679efd8b9185b8ae70320ba7dcd00eeed1d | 1d7cf28cabdc7ad62d0079582e763d630f3c27e1 | "2022-04-11T09:15:52Z" | c++ | "2022-06-18T12:18:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,114 | ["src/Common/ThreadPool.cpp"] | Reset thread names in thread pool | **Describe the issue**
We observe many obsolete names of idle threads like `Formatter`.
**Solution**
Reset thread name after a job is done in thread pool. | https://github.com/ClickHouse/ClickHouse/issues/36114 | https://github.com/ClickHouse/ClickHouse/pull/36115 | fcb83a12ff28cc79c12dc8441ea1e5884ae85b58 | 6a165787a6b653ef764e9d1c20beb599541dfb05 | "2022-04-10T23:00:12Z" | c++ | "2022-04-13T16:10:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,111 | ["src/Client/ClientBase.cpp", "tests/queries/0_stateless/01293_client_interactive_vertical_multiline.expect", "tests/queries/0_stateless/01293_client_interactive_vertical_singleline.expect"] | Supporting \G and semicolon after query in clickhouse-client to enable Vertical format | Currently executing queries like:
```
SELECT * FROM system.tables LIMIT 10;\G
SELECT * FROM system.tables LIMIT 10\G;
```
Ends up with a syntax error.
From the documentation, it should not be an error:
> You can specify \G instead of or after the semicolon.
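A hypothetical sketch of the suffix handling the documentation describes. This is not the client's real tokenizer (which must also ignore `\G` inside string literals); it only shows the accepted orderings:

```python
def split_vertical_suffix(query: str):
    """Strip trailing `;` / `\\G` in either order; report Vertical format."""
    q = query.rstrip()
    vertical = False
    while q.endswith(("\\G", ";")):
        if q.endswith("\\G"):
            vertical = True
            q = q[:-2].rstrip()
        else:
            q = q[:-1].rstrip()
    return q, vertical

for raw in (r"SELECT 1\G", r"SELECT 1;\G", r"SELECT 1\G;"):
    print(split_vertical_suffix(raw))  # ('SELECT 1', True) for all three
```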
| https://github.com/ClickHouse/ClickHouse/issues/36111 | https://github.com/ClickHouse/ClickHouse/pull/36130 | 6b6671e89f75f3c448b51d2f9eab8b369f8e7fef | d9ce08915a05b6131cb171606a64a68fd2da6b60 | "2022-04-10T21:14:09Z" | c++ | "2022-04-13T08:39:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,071 | ["src/Processors/QueryPlan/Optimizations/limitPushDown.cpp", "tests/queries/0_stateless/02265_limit_push_down_over_window_functions_bug.reference", "tests/queries/0_stateless/02265_limit_push_down_over_window_functions_bug.sql"] | Window functions should not be affected by LIMIT | ```
SELECT
number,
leadInFrame(number) OVER w AS W
FROM numbers(10)
WINDOW w AS (ORDER BY number ASC Rows BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
LIMIT 3;
ββnumberββ¬βWββ
β 0 β 1 β
β 1 β 2 β
β 2 β 0 β <-- W should be 3!
ββββββββββ΄ββββ
```
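For reference, the expected evaluation order can be modeled directly: the window function is computed over the full, unlimited row set first, and LIMIT trims the result afterwards (an illustrative sketch, not ClickHouse's implementation):

```python
def lead_in_frame(values, offset=1, default=0):
    """lead over the whole frame: the value `offset` rows ahead, else default."""
    return [values[i + offset] if i + offset < len(values) else default
            for i in range(len(values))]

numbers = list(range(10))
w = lead_in_frame(numbers)           # window evaluated over ALL 10 rows first...
result = list(zip(numbers, w))[:3]   # ...then LIMIT 3 is applied
print(result)  # [(0, 1), (1, 2), (2, 3)] -- W for number=2 is 3, not 0
```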
In contrast: https://www.db-fiddle.com/f/hNNECnp7YC3E2zBKBaxN6T/1 | https://github.com/ClickHouse/ClickHouse/issues/36071 | https://github.com/ClickHouse/ClickHouse/pull/36075 | d9ce08915a05b6131cb171606a64a68fd2da6b60 | 362fcfd2b8f50edcf4fc4b43c15ac5ff7c2d964f | "2022-04-08T13:17:14Z" | c++ | "2022-04-13T09:57:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,043 | ["src/Interpreters/ExpressionAnalyzer.cpp", "src/Interpreters/ExpressionAnalyzer.h", "tests/queries/0_stateless/02354_read_in_order_prewhere.reference", "tests/queries/0_stateless/02354_read_in_order_prewhere.sql"] | Not found column in block exception | Hello,
We upgraded our ClickHouse version to 22.3.3.44. After the upgrade, we get a strange exception in some queries: if I include the word 'n' in the WHERE clause, it throws an exception.
I have a sample query like below:
```
SELECT ts FROM event WHERE ((appkey='n') AND (ecode = 'n')) ORDER BY ts ASC limit 1;
```
It gives below exception:
```
Received exception from server (version 22.3.3):
Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Not found column ecode in block. There are only columns: appkey, ts, equals(ecode, 'n'). (NOT_FOUND_COLUMN_IN_BLOCK)
```
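The columns listed in the error hint at a two-stage (PREWHERE-style) read in which the original `ecode` column is replaced by the precomputed condition. A minimal sketch of that situation, purely as my own illustration (not ClickHouse internals):

```python
# Columns read in the first stage of the two-stage read:
block = {"appkey": ["n"], "ts": [1]}
# The condition is precomputed and the original `ecode` column is dropped:
block["equals(ecode, 'n')"] = [1]

needed = "ecode"
print(needed in block)  # False -- "Not found column ecode in block"
```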
If I replace 'n' with 'a' in the WHERE clause, there is no exception:
```
SELECT ts FROM event WHERE ((appkey='a') AND (ecode = 'a')) ORDER BY ts ASC limit 1;
```
Result:
```
0 rows in set. Elapsed: 0.047 sec.
```
**Affected Version**
22.3.3.44
**show create table event**
```
CREATE TABLE event
(
`appkey` String,
`ecode` String,
`userid` String,
`iid` String,
`exid` String,
`did` String,
`sid` String,
`pid` Int32,
`fid` String,
`appVersion` String,
`platform` UInt8,
`revenue` Float64,
`ts` DateTime,
`ea` String,
`eb` String,
`ec` Int32,
`ed` Int32,
`ee` String,
`ef` String,
`eg` String,
`eh` String,
`ei` String,
`ej` String,
`ek` String,
`el` Float64,
`em` String,
`en` String,
`eo` Int32,
`ep` Float64,
`eq` Float64,
`er` Float64,
`es` Float64,
`et` String,
`eu` String,
`ev` String,
`fc` Float64,
`fd` Float64,
`fe` Int32,
`ff` Int32,
`fg` UInt8,
`fh` UInt8,
`fi` DateTime,
`fj` DateTime,
`fk` String,
`fl` String,
`fm` Int32,
`fn` Int32,
`fo` String,
`fp` String,
`fq` Int64,
`fr` Int64,
`fs` String,
`ft` Float64,
`fu` Int64,
`fv` String,
`fw` Float64,
`fx` String,
`fy` Int32,
`fz` Int32,
`ga` String,
`gb` Int64,
`gc` Int64,
`data` String,
`txt` String,
`cmp` String,
`piid` String,
`uid` String,
`gd` String,
`ge` String,
`rv` Float64
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(ts)
ORDER BY (appkey, ecode, ts)
SETTINGS index_granularity = 8192
``` | https://github.com/ClickHouse/ClickHouse/issues/36043 | https://github.com/ClickHouse/ClickHouse/pull/39157 | c669721d46519a51c9e61c40bee8bd1fa8a7995a | f6a82a6a5345fd36cd5198e09c4ad8398aa4d310 | "2022-04-07T15:23:16Z" | c++ | "2022-07-15T14:37:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,038 | ["docker/test/fuzzer/run-fuzzer.sh", "tests/queries/0_stateless/02504_parse_datetime_best_effort_calebeaires.reference", "tests/queries/0_stateless/02504_parse_datetime_best_effort_calebeaires.sql"] | Add support to dates before 1970 to parseDateTime64BestEffort | This query does not give the correct date when using the best-effort date-time parsing feature. Do you have plans to support dates before 1970 in parseDateTime64BestEffort?
```sql
insert into
`bucket_155`.`my_table` (
`col_date`,
`col_date32`,
`col_datetime`,
`col_datetime32`,
`col_datetime64`
)
values
(
parseDateTime64BestEffort('1969-01-01'),
'1969-01-01',
parseDateTime64BestEffort('1969-01-01 10:42:00'),
parseDateTime64BestEffort('1969-01-01 10:42:00'),
parseDateTime64BestEffort('1969-01-01 10:42:00')
)
```
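For context (my own illustration, not from the issue): dates before 1970 correspond to negative Unix timestamps, which a plain unsigned DateTime cannot represent, while DateTime64 stores a signed value, so supporting them in the best-effort parsers looks feasible:

```python
from datetime import datetime, timezone

# 1969-01-01 is exactly 365 days before the Unix epoch.
ts = datetime(1969, 1, 1, tzinfo=timezone.utc).timestamp()
print(ts)  # -31536000.0 -- negative, out of range for an unsigned DateTime
```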
| https://github.com/ClickHouse/ClickHouse/issues/36038 | https://github.com/ClickHouse/ClickHouse/pull/44339 | 875797ee63a3b6ed7f9523c5ccad3e07a5021872 | 7f0800fbd1cfaed200db7213f613d52fe5fbc7b5 | "2022-04-07T14:08:37Z" | c++ | "2022-12-29T12:45:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,037 | ["src/Interpreters/SelectQueryOptions.h", "src/Processors/QueryPlan/DistributedCreateLocalPlan.cpp", "src/Storages/ProjectionsDescription.cpp", "tests/queries/0_stateless/02315_optimize_monotonous_functions_in_order_by_remote.reference", "tests/queries/0_stateless/02315_optimize_monotonous_functions_in_order_by_remote.sql"] | DB::Exception: Cannot find column `toDateTime(number)` in source stream, there are only columns: [number]. (THERE_IS_NO_COLUMN) | **How to check**
```
CREATE TABLE test_local on cluster '{cluster}' (number UInt64) ENGINE = MergeTree() ORDER BY number;
CREATE TABLE test_distruted (number UInt64) ENGINE = Distributed('{cluster}', currentDatabase(), test_local);
INSERT INTO test_local SELECT number from system.numbers limit 3;
select number from test_distruted order by toDateTime(number) DESC;
```
Without a Distributed table (only MergeTree) it works correctly.
**Versions**
21.12.4 - ok
22.2.3 - ok
22.3.2 - not ok | https://github.com/ClickHouse/ClickHouse/issues/36037 | https://github.com/ClickHouse/ClickHouse/pull/37724 | 77c06447d56fbcb9483047435dc0e03d652f409e | a0020cb55c856222404894ca6f571e3be570afe0 | "2022-04-07T13:39:51Z" | c++ | "2022-06-01T08:54:45Z" |
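For background (my own note, assuming the failure is in the ORDER BY rewrite path): an optimization such as `optimize_monotonous_functions_in_order_by` may rewrite `ORDER BY toDateTime(number)` into `ORDER BY number`, which is only valid because `toDateTime` is monotonic — the two orderings coincide:

```python
from datetime import datetime, timezone

numbers = [2, 0, 1]
# Sorting by the (monotonic) function gives the same order as sorting
# by the underlying column, which is what the rewrite relies on.
by_function = sorted(numbers,
                     key=lambda n: datetime.fromtimestamp(n, tz=timezone.utc),
                     reverse=True)
by_column = sorted(numbers, reverse=True)
print(by_function, by_column)  # [2, 1, 0] [2, 1, 0]
```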
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 36,025 | ["cmake/cpu_features.cmake"] | Power build is throwing libcxx error and binaries from ci pipeline also throws the error. | Hello there,
As part of a system porting requirement we need to port **ClickHouse**, along with some other data sources, to **PPC64LE**.
We have been trying to port ClickHouse for a long time now; we ran into several issues, so it got delayed. We are now stuck on a problem where we are able to build it from source but not able to run it.
We are using **master** branch.
compiler **CLANG-LLVM-13**
**Operating system**
> PPC64le ubuntu focal
inside docker container with base image ubuntu:focal itself
**Dockerfile**
```
FROM ubuntu:focal
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update
RUN apt install -y git cmake ninja-build python
RUN git clone --recursive https://github.com/ClickHouse/ClickHouse.git
RUN apt update -y && apt install -y build-essential gcc wget xz-utils
RUN wget https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.1/clang+llvm-13.0.1-powerpc64le-linux-ubuntu-18.04.5.tar.xz
RUN tar -xf clang+llvm-13.0.1-powerpc64le-linux-ubuntu-18.04.5.tar.xz
RUN mv clang+llvm-13.0.1-powerpc64le-linux-ubuntu-18.04.5 clang-13-ppc64le && \
    cd clang-13-ppc64le && \
    cp -R * /usr/local/
RUN apt install -y libncurses5
RUN clang --version
RUN export CMAKE_PREFIX_PATH=/clang-13-ppc64le/bin/

WORKDIR ClickHouse

RUN apt install -y ninja-build

RUN mkdir build-ppc64le
RUN CC=/clang-13-ppc64le/bin/clang \
    CXX=/clang-13-ppc64le/bin/clang++ \
    cmake . -Bbuild-ppc64le \
    -DCMAKE_TOOLCHAIN_FILE=cmake/linux/toolchain-ppc64le.cmake \
    -DCMAKE_C_COMPILER=/clang-13-ppc64le/bin/clang \
    -DCMAKE_CXX_COMPILER=/clang-13-ppc64le/bin/clang++ && \
    ninja -C build-ppc64le

RUN cd build-ppc64le && \
    echo ' set(CMAKE_INSTALL_PREFIX "/usr") ' | cat - cmake_install.cmake > temp && mv temp cmake_install.cmake && \
    cat cmake_install.cmake && \
    cmake -P \
    cmake_install.cmake -DCMAKE_INSTALL_PREFIX=/usr

CMD clickhouse start
```
### Error ###
The image is built successfully, but on `docker run` — i.e. when trying to run **clickhouse start** — it throws
> Illegal instruction (core dumped)
While debugging with **gdb**, we got the following trace.
> Program received signal SIGILL, Illegal instruction.
0x00000000275a71b8 in _GLOBAL__sub_I_db_impl_open.cc () at ../contrib/libcxx/include/vector:650
650 ../contrib/libcxx/include/vector: No such file or directory.
We tried to copy the **libcxx** files (from the clang libcxx, using _cp -R /clang-13-ppc64le/include/c++/v1/* ClickHouse/contrib/libcxx_) into **ClickHouse/contrib/libcxx**, but nothing worked.
#### We tried downloading [binaries (from S3)](https://s3.amazonaws.com/clickhouse-builds/22.4/71fb04ea4ad17432baba7934a10217f5f6a1bde3/binary_ppc64le/clickhouse) built on the [CI pipeline](https://github.com/ClickHouse/ClickHouse/runs/5654128277?check_suite_focus=true), and running them threw the same error.
### Any help will be much appreciated.
_ Thank you _
| https://github.com/ClickHouse/ClickHouse/issues/36025 | https://github.com/ClickHouse/ClickHouse/pull/36529 | 76839016891fc51381434d7a543b2de336989e2d | 2e88b8b4b8edb0466565f5f90068c9ce4e3e0a62 | "2022-04-07T09:34:16Z" | c++ | "2022-04-26T05:43:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,952 | ["src/Access/SettingsProfilesCache.cpp", "src/Access/SettingsProfilesInfo.cpp", "src/Access/SettingsProfilesInfo.h", "tests/integration/test_settings_profile/test.py"] | Code: 1001. DB::Exception: std::out_of_range: unordered_map::at: key not found | Hi
When I used clickhouse-client to connect to the database, the database reported the following error!
<Error> TCPHandler: Code: 1001. DB::Exception: std::out_of_range: unordered_map::at: key not found. (STD_EXCEPTION), Stack trace (when copying this message, always include the lines below):
0. std::logic_error::logic_error(char const*) @ 0x18dd2a6d in ?
1. ? @ 0xa50afc9 in /usr/bin/clickhouse
2. ? @ 0xa50af80 in /usr/bin/clickhouse
3. DB::SettingsProfilesInfo::getProfileNames() const @ 0x1473a1b1 in /usr/bin/clickhouse
4. DB::SessionLog::addLoginSuccess(StrongTypedef<wide::integer<128ul, unsigned int>, DB::UUIDTag> const&, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, DB::Context const&) @ 0x14739ab6 in /usr/bin/clickhouse
5. DB::Session::makeQueryContextImpl(DB::ClientInfo const*, DB::ClientInfo*) const @ 0x1472f947 in /usr/bin/clickhouse
6. DB::TCPHandler::receiveQuery() @ 0x1544e3ba in /usr/bin/clickhouse
7. DB::TCPHandler::receivePacket() @ 0x154477d6 in /usr/bin/clickhouse
8. DB::TCPHandler::runImpl() @ 0x15440dfe in /usr/bin/clickhouse
9. DB::TCPHandler::run() @ 0x15450e79 in /usr/bin/clickhouse
10. Poco::Net::TCPServerConnection::start() @ 0x164b264f in /usr/bin/clickhouse
11. Poco::Net::TCPServerDispatcher::run() @ 0x164b4aa1 in /usr/bin/clickhouse
12. Poco::PooledThread::run() @ 0x16671e49 in /usr/bin/clickhouse
13. Poco::ThreadImpl::runnableEntry(void*) @ 0x1666f1a0 in /usr/bin/clickhouse
14. ? @ 0x7fb387437609 in ?
15. __clone @ 0x7fb38735c163 in ?
We installed the latest versions of clickhouse-server and clickhouse-client.
There was no such error in the version 20.9.2.20 | https://github.com/ClickHouse/ClickHouse/issues/35952 | https://github.com/ClickHouse/ClickHouse/pull/42641 | 4a145b6c96b79473b1a8547cf6bae3b038cde1db | 6f564c59bd17f6a9672c9925219c1923207c665c | "2022-04-05T04:40:56Z" | c++ | "2022-11-30T12:01:57Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,902 | ["src/Processors/Transforms/buildPushingToViewsChain.cpp", "tests/queries/0_stateless/02380_insert_mv_race.reference", "tests/queries/0_stateless/02380_insert_mv_race.sh"] | Block structure mismatch in QueryPipeline stream: different number of columns | https://s3.amazonaws.com/clickhouse-test-reports/0/91453fe4d65981fa0cf5ee72ba011014d04b3510/stress_test__debug__actions_.html
```
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 15:56:30.878986 [ 2031 ] {6bd4a5d9-b8e1-4082-b2ba-39c6b9d0c079} <Fatal> : Logical error: 'Block structure mismatch in QueryPipeline stream: different number of columns:
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:03.209124 [ 3737 ] {} <Fatal> BaseDaemon: ########################################
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:03.217716 [ 3737 ] {} <Fatal> BaseDaemon: (version 22.4.1.1176 (official build), build id: E96B13DFBEF40B95) (from thread 2031) (query_id: 6bd4a5d9-b8e1-4082-b2ba-39c6b9d0c079) (query: INSERT INTO test.visits_null
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:03.224710 [ 3737 ] {} <Fatal> BaseDaemon:
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:03.233583 [ 3737 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f6b9681f03b 0x7f6b967fe859 0x17a91d34 0x17a91e42 0x25b395b1 0x25b376d9 0x25b375a0 0x25da15cf 0x25dd4583 0x26b5d43f 0x26fdc866 0x26fd9f64 0x27ee7fab 0x27ef65a5 0x2c78e1f9 0x2c78ea08 0x2c9d4ed4 0x2c9d19da 0x2c9d07bc 0x7f6b969d6609 0x7f6b968fb163
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:03.352606 [ 3737 ] {} <Fatal> BaseDaemon: 4. raise @ 0x7f6b9681f03b in ?
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:03.354055 [ 3737 ] {} <Fatal> BaseDaemon: 5. abort @ 0x7f6b967fe859 in ?
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:03.726376 [ 3737 ] {} <Fatal> BaseDaemon: 6. ./build_docker/../src/Common/Exception.cpp:52: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*> > const&) @ 0x17a91d34 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:06.766024 [ 3737 ] {} <Fatal> BaseDaemon: 7. ./build_docker/../src/Common/Exception.cpp:59: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x17a91e42 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:07.507648 [ 3737 ] {} <Fatal> BaseDaemon: 8. ./build_docker/../src/Core/Block.cpp:35: void DB::onError<void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x25b395b1 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:08.407440 [ 3737 ] {} <Fatal> BaseDaemon: 9. ./build_docker/../src/Core/Block.cpp:94: void DB::checkBlockStructure<void>(DB::Block const&, DB::Block const&, std::__1::basic_string_view<char, std::__1::char_traits<char> >, bool) @ 0x25b376d9 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:09.102950 [ 3737 ] {} <Fatal> BaseDaemon: 10. ./build_docker/../src/Core/Block.cpp:635: DB::assertBlocksHaveEqualStructure(DB::Block const&, DB::Block const&, std::__1::basic_string_view<char, std::__1::char_traits<char> >) @ 0x25b375a0 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:09.879589 [ 3737 ] {} <Fatal> BaseDaemon: 11. ./build_docker/../src/QueryPipeline/Pipe.cpp:682: DB::Pipe::addChains(std::__1::vector<DB::Chain, std::__1::allocator<DB::Chain> >) @ 0x25da15cf in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:10.398952 [ 3737 ] {} <Fatal> BaseDaemon: 12. ./build_docker/../src/QueryPipeline/QueryPipelineBuilder.cpp:139: DB::QueryPipelineBuilder::addChains(std::__1::vector<DB::Chain, std::__1::allocator<DB::Chain> >) @ 0x25dd4583 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:11.622474 [ 3737 ] {} <Fatal> BaseDaemon: 13. ./build_docker/../src/Interpreters/InterpreterInsertQuery.cpp:450: DB::InterpreterInsertQuery::execute() @ 0x26b5d43f in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:12.953785 [ 3737 ] {} <Fatal> BaseDaemon: 14. ./build_docker/../src/Interpreters/executeQuery.cpp:676: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x26fdc866 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:14.370481 [ 3737 ] {} <Fatal> BaseDaemon: 15. ./build_docker/../src/Interpreters/executeQuery.cpp:987: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x26fd9f64 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:15.350690 [ 3737 ] {} <Fatal> BaseDaemon: 16. ./build_docker/../src/Server/TCPHandler.cpp:332: DB::TCPHandler::runImpl() @ 0x27ee7fab in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:16.186887 [ 3737 ] {} <Fatal> BaseDaemon: 17. ./build_docker/../src/Server/TCPHandler.cpp:1767: DB::TCPHandler::run() @ 0x27ef65a5 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:16.537775 [ 3737 ] {} <Fatal> BaseDaemon: 18. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x2c78e1f9 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:16.772403 [ 3737 ] {} <Fatal> BaseDaemon: 19. ./build_docker/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x2c78ea08 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:17.032703 [ 3737 ] {} <Fatal> BaseDaemon: 20. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:199: Poco::PooledThread::run() @ 0x2c9d4ed4 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:17.223502 [ 3737 ] {} <Fatal> BaseDaemon: 21. ./build_docker/../contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x2c9d19da in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:17.710916 [ 3737 ] {} <Fatal> BaseDaemon: 22. ./build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345: Poco::ThreadImpl::runnableEntry(void*) @ 0x2c9d07bc in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:17.721915 [ 3737 ] {} <Fatal> BaseDaemon: 23. ? @ 0x7f6b969d6609 in ?
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:17.729237 [ 3737 ] {} <Fatal> BaseDaemon: 24. __clone @ 0x7f6b968fb163 in ?
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:20.694104 [ 3737 ] {} <Fatal> BaseDaemon: Checksum of the binary: 8B4FDDB0D9A351DB908AB0E82708066D, integrity check passed.
/var/log/clickhouse-server/clickhouse-server.log.2:2022.04.03 16:02:23.579745 [ 646 ] {} <Fatal> Application: Child process was terminated by signal 6.
``` | https://github.com/ClickHouse/ClickHouse/issues/35902 | https://github.com/ClickHouse/ClickHouse/pull/39477 | c983f14ed143a964c3daac12a6867d072027da1e | 0005c37acca55253a5ec7d762ff228a33ae76745 | "2022-04-04T10:33:00Z" | c++ | "2022-08-05T15:19:45Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,897 | ["src/Interpreters/UserDefinedExecutableFunctionFactory.cpp", "tests/integration/test_executable_user_defined_function/functions/test_function_config.xml", "tests/integration/test_executable_user_defined_function/test.py", "tests/integration/test_executable_user_defined_function/user_scripts/input_nullable.py"] | user-defined-functions null value input | When I'm executing a UDF function, a `null value` input to the script is received as `0`(for numbers) and `empty string` (for strings).
The UDF definition is like this:
```
<function>
<type>executable</type>
<name>some_function_name</name>
<return_name>result</return_name>
<return_type>String</return_type>
<argument>
<type>Nullable(Int32)</type>
<name>argument_1</name>
</argument>
<format>JSONEachRow</format>
<command>some_function.py</command>
</function>
```
And the script file is like this (whatever input I get, I log it to a file):
```
#!/usr/bin/python3
import sys
import json

def write_to_file(line):
    with open("/tmp/input.log", 'a') as fw:
        fw.write(str(line))

if __name__ == '__main__':
    for line in sys.stdin:
        # Writing log
        write_to_file(line)
        result = {"result": str(line)}
        print(json.dumps(result))
        sys.stdout.flush()
```
I got the results below:
```
clickhouse-client --query="select value, some_function_name(value) from (SELECT 1 as value UNION ALL SELECT NULL UNION ALL SELECT 0) FORMAT Vertical" --input_format_tsv_empty_as_default=1
Row 1:
──────
value: 1
some_function_name(value): {"argument_1":1}

Row 2:
──────
value: ᴺᵁᴸᴸ
some_function_name(value): ᴺᵁᴸᴸ

Row 3:
──────
value: 0
some_function_name(value): {"argument_1":0}
```
Here, the function result for `Row 2` should be `{"argument_1":NULL}`, but it is just `NULL`.
And the log I got for the inputs is as below:
```
{"argument_1":1}
{"argument_1":0}
{"argument_1":0}
```
So, is there any option to pass the `NULL` value through to the input script as `NULL`?
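For illustration only — if the server serialized NULL as a JSON `null` (the hypothetical behavior this issue asks for), the script side could distinguish it from a genuine `0` with a plain `json.loads`:

```python
import json

def handle(line):
    row = json.loads(line)
    value = row.get("argument_1")
    if value is None:  # would only trigger if NULL arrived as JSON null
        return {"result": "NULL"}
    return {"result": str(value)}

print(handle('{"argument_1":1}'))     # {'result': '1'}
print(handle('{"argument_1":null}'))  # {'result': 'NULL'}
```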
| https://github.com/ClickHouse/ClickHouse/issues/35897 | https://github.com/ClickHouse/ClickHouse/pull/37711 | 1d9c8351a0703231938fcb4b2d90cea192f6f439 | cb931353265bd9f672953c33875cf1a54f7c8a2b | "2022-04-04T05:35:51Z" | c++ | "2022-06-02T09:05:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,891 | ["docs/en/sql-reference/data-types/enum.md", "docs/ru/sql-reference/data-types/enum.md", "src/DataTypes/DataTypeEnum.cpp", "tests/queries/0_stateless/00757_enum_defaults.reference", "tests/queries/0_stateless/00757_enum_defaults.sql"] | Auto assign numbers for Enum elements | ```
:) CREATE TEMPORARY TABLE test (x enum('a', 'b'))
CREATE TEMPORARY TABLE test
(
`x` enum('a', 'b')
)
Query id: 081d03e8-9945-4b0c-a906-4477f4b45d64
0 rows in set. Elapsed: 0.005 sec.
Received exception from server (version 22.4.1):
Code: 223. DB::Exception: Received from localhost:9000. DB::Exception: Elements of Enum data type must be of form: 'name' = number, where name is string literal and number is an integer. (UNEXPECTED_AST_STRUCTURE)
``` | https://github.com/ClickHouse/ClickHouse/issues/35891 | https://github.com/ClickHouse/ClickHouse/pull/36352 | 894b1b163e982c6929ab451467f6e253e7e3648b | f6a7b6c2a122742ae060762323b431d4aa00b5d6 | "2022-04-03T22:14:10Z" | c++ | "2022-04-20T03:34:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,890 | ["src/DataTypes/DataTypeFactory.cpp", "src/Parsers/ParserDataType.cpp", "tests/queries/0_stateless/02271_int_sql_compatibility.reference", "tests/queries/0_stateless/02271_int_sql_compatibility.sql"] | UNSIGNED modifier does not work with unused parameters of INT | **Describe the unexpected behaviour**
```
:) CREATE TEMPORARY TABLE test (x INT UNSIGNED)
CREATE TEMPORARY TABLE test
(
`x` INT UNSIGNED
)
Query id: f4fa2663-3a8a-445d-a9cb-de759ac191de
Ok.
0 rows in set. Elapsed: 0.002 sec.
:) DROP TABLE test
DROP TABLE test
Query id: 32fee2e0-78c4-4fed-8c34-78c5aaab2781
Ok.
0 rows in set. Elapsed: 0.001 sec.
ubuntu-4gb-nbg1-2 :) CREATE TEMPORARY TABLE test (x INT(11) UNSIGNED)
Syntax error: failed at position 40 ('UNSIGNED')
``` | https://github.com/ClickHouse/ClickHouse/issues/35890 | https://github.com/ClickHouse/ClickHouse/pull/36423 | 2e187e4b97665f07ec1d255b34f9063847eeca46 | 5b5dd4fa4673d0137e56c8bbe3af28444866cd54 | "2022-04-03T22:12:16Z" | c++ | "2022-04-22T12:27:57Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,889 | ["src/Interpreters/InterpreterCreateQuery.cpp", "src/Parsers/ParserCreateQuery.h", "tests/queries/0_stateless/01269_create_with_null.reference", "tests/queries/0_stateless/01269_create_with_null.sql", "tests/queries/0_stateless/02302_column_decl_null_before_defaul_value.reference", "tests/queries/0_stateless/02302_column_decl_null_before_defaul_value.sql"] | Wrong order of NOT NULL data type modifier and DEFAULT. | **Describe the unexpected behaviour**
```
:) CREATE TEMPORARY TABLE test (x INT NOT NULL DEFAULT 1)
Syntax error: failed at position 52 ('DEFAULT') (line 3, col 22):
CREATE TEMPORARY TABLE test
(
`x` INT NOT NULL DEFAULT 1
^
)
Expected one of: COMMENT, CODEC, TTL, token, Comma, ClosingRoundBracket
```
But this query succeeded:
```
:) CREATE TEMPORARY TABLE test (x INT DEFAULT 1 NOT NULL)
CREATE TEMPORARY TABLE test
(
`x` INT NOT NULL DEFAULT 1
)
``` | https://github.com/ClickHouse/ClickHouse/issues/35889 | https://github.com/ClickHouse/ClickHouse/pull/37337 | 2ff747785ed6eb6da8911975f2673f976db48dd6 | 04e2737a572fbc9e9baf9a5b735a906458fd1e2b | "2022-04-03T22:09:55Z" | c++ | "2022-05-24T19:16:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,841 | ["src/Common/ProfileEvents.cpp", "src/Common/ShellCommand.cpp", "src/Interpreters/UserDefinedExecutableFunctionFactory.cpp", "tests/queries/0_stateless/02252_executable_user_defined_function_short_circuit.reference", "tests/queries/0_stateless/02252_executable_user_defined_function_short_circuit.sql"] | Assume User Defined Function as heavy-to-process in Short Circuit Processing | **Use case**
Please look at the next request:
```
SELECT * FROM numbers(1000000) WHERE number % 10000 == 0 and extremely_heavy_udf(number) == 'ok'
```
Now, since all functions are evaluated "eagerly", the heavy UDF drastically reduces the performance of the query.
**Describe the solution you'd like**
As far as I understood after a conversation with @alexey-milovidov, you have a "short-circuit processing" optimization for simple logical predicates (e.g. in conjunctive normal form) that evaluates heavy functions lazily.
So it would be nice to treat all UDFs as heavy-to-process (which is very close to the truth).
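The effect of treating the UDF as heavy can be sketched in Python (my own illustration — Python's `and` is already lazy, like the requested short-circuit evaluation): the cheap predicate filters first, so the expensive call runs only for the surviving rows:

```python
calls = 0

def extremely_heavy_udf(n):
    # Stand-in for the expensive executable UDF; we only count invocations.
    global calls
    calls += 1
    return "ok"

rows = [n for n in range(1_000_000)
        if n % 10000 == 0 and extremely_heavy_udf(n) == "ok"]
print(len(rows), calls)  # 100 100 -- the UDF ran 100 times, not 1,000,000
```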
| https://github.com/ClickHouse/ClickHouse/issues/35841 | https://github.com/ClickHouse/ClickHouse/pull/35917 | 4479b68980f03a5adc30b315c38793ffef95e7d4 | c3c284e6e6b5160790030da307e8129bef0e5e88 | "2022-04-01T12:43:52Z" | c++ | "2022-04-05T12:05:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,837 | ["docs/en/sql-reference/statements/create/view.md", "src/Core/Settings.h", "src/Interpreters/SystemLog.cpp", "src/Processors/Transforms/buildPushingToViewsChain.cpp", "tests/queries/0_stateless/02572_materialized_views_ignore_errors.reference", "tests/queries/0_stateless/02572_materialized_views_ignore_errors.sql", "tests/queries/0_stateless/02572_system_logs_materialized_views_ignore_errors.reference", "tests/queries/0_stateless/02572_system_logs_materialized_views_ignore_errors.sql"] | Bad insert into MV with Engine=URL | It would be great to have retries and drops for INSERTs when you have a materialized view with Engine=URL and the HTTP server is gone.
Currently all INSERTs fail, even into the main table, so it should be treated as a soft exception.
It would also be great if Engine=URL didn't send "Connection: close" in the HTTP request :-)
P.S. We have discussed this feature with @alexey-milovidov :-)
| https://github.com/ClickHouse/ClickHouse/issues/35837 | https://github.com/ClickHouse/ClickHouse/pull/46658 | 65d671b7c72c7b1da23f831faa877565cf34f92c | 575ffbc4653b117e918356c8e60f7748df956643 | "2022-04-01T11:47:18Z" | c++ | "2023-03-09T11:19:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,836 | ["src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp", "src/Storages/MergeTree/MergeTreeSettings.h", "src/Storages/MergeTree/SimpleMergeSelector.cpp", "src/Storages/MergeTree/SimpleMergeSelector.h", "tests/integration/test_merge_tree_optimize_old_parts/__init__.py", "tests/integration/test_merge_tree_optimize_old_parts/configs/zookeeper_config.xml", "tests/integration/test_merge_tree_optimize_old_parts/test.py"] | Automatically optimize on special conditions | ClickHouse has a special option, "do_not_merge_across_partitions_select_final", for the ReplacingMergeTree engine, which allows fast SELECTs on the "optimized" partitions. Unfortunately, you have to run "OPTIMIZE TABLE ... PARTITION ... FINAL" manually (e.g. via crontab).
It would be great to have an option that tells ClickHouse to compact older partitions under certain conditions,
e.g.:
```
CREATE TABLE select_final (t DateTime, x Int32, string String) ENGINE = ReplacingMergeTree() PARTITION BY toYYYYMMDD(t) ORDER BY (x, t) SETTINGS optimize_partition_after = 3600
```
ClickHouse would check whether a partition is older than 3600 seconds and has multiple parts, and in that case would automatically run optimization (compaction).
P.S. we just discussed this option with @alexey-milovidov :-)
| https://github.com/ClickHouse/ClickHouse/issues/35836 | https://github.com/ClickHouse/ClickHouse/pull/42423 | b5d51e8a8f24578e80f12a22e5c2a6d8549c177e | 9ee7131f678b9c3d73ca67ec00ad4f5599453ade | "2022-04-01T11:29:48Z" | c++ | "2022-10-24T17:41:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,830 | ["src/Interpreters/InterpreterCreateQuery.cpp", "tests/queries/0_stateless/02292_nested_not_flattened_detach.reference", "tests/queries/0_stateless/02292_nested_not_flattened_detach.sql"] | Nested field structure lost when restarting the server | **Describe what's wrong**
We have a nested field in a table.
The table is created using SET flatten_nested = 0.
When the server is restarted the field is no longer nested (only flattened arrays appear).
**Does it reproduce on recent release?**
version 22.2.3.1
**How to reproduce**
Create a table using:
```
SET flatten_nested = 0
```
```
CREATE TABLE test.test
(
    id String,
    nested_field Nested (nf1 String, nf2 String)
) ENGINE = MergeTree ORDER BY id
```
Retrieve the table structure using describe:
```
describe test.test
```
```
name          type
id            String
nested_field  Nested(nf1 String, nf2 String)
```
Restart the ClickHouse server.
Retrieve the table structure using describe:
```
name              type
id                String
nested_field.nf1  Array(String)
nested_field.nf2  Array(String)
```
**Expected behavior**
Recover the initial structure of the table, not the flattened one.
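For background (my own illustration, assuming the usual flattening semantics): with flatten_nested = 1 a Nested column is exposed as parallel arrays, which is exactly the shape that reappears after the restart:

```python
# Nested(nf1 String, nf2 String) is stored as parallel arrays; with
# flatten_nested = 1 the table exposes exactly those arrays as columns.
nested_rows = [{"nf1": "a", "nf2": "b"}, {"nf1": "c", "nf2": "d"}]

flattened = {
    "nested_field.nf1": [row["nf1"] for row in nested_rows],
    "nested_field.nf2": [row["nf2"] for row in nested_rows],
}
print(flattened)
# {'nested_field.nf1': ['a', 'c'], 'nested_field.nf2': ['b', 'd']}
```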
| https://github.com/ClickHouse/ClickHouse/issues/35830 | https://github.com/ClickHouse/ClickHouse/pull/36803 | fd980e6840530db64d96749ff806d27e3a9d47fa | b98ac570901e025fc93b7be4070dc760fd283042 | "2022-04-01T09:11:53Z" | c++ | "2022-05-02T11:58:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,816 | ["src/Storages/MergeTree/MergeTreeData.cpp", "tests/queries/0_stateless/00980_merge_alter_settings.sql", "tests/queries/0_stateless/00980_zookeeper_merge_tree_alter_settings.sql", "tests/queries/0_stateless/02252_reset_non_existing_setting.reference", "tests/queries/0_stateless/02252_reset_non_existing_setting.sql"] | reset SETTING TTL silently does nothing | ```sql
create table A ( A date) Engine=MergeTree order by A TTL A+interval 1 year;
alter table A reset SETTING TTL;
show create table A;
CREATE TABLE default.A
(
`A` Date
)
ENGINE = MergeTree
ORDER BY A
TTL A + toIntervalYear(1)
SETTINGS index_granularity = 8192
``` | https://github.com/ClickHouse/ClickHouse/issues/35816 | https://github.com/ClickHouse/ClickHouse/pull/35884 | bd89fcafdbc44b4b41f1c7458af5eeedec062774 | 3ccf99c3d7790769a35f3d7ab7d76031cfdf1f9f | "2022-03-31T16:23:16Z" | c++ | "2022-04-04T13:27:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,801 | ["tests/queries/0_stateless/02807_default_date_time_nullable.reference", "tests/queries/0_stateless/02807_default_date_time_nullable.sql"] | Wrong behavior of DateTime default value when nullable | The behavior of the default value differs depending on whether a DateTime column is Nullable or not:
When the column is Nullable, the default value for DateTime '0' is wrong (underflowing?):
```sql
create table test (
data int,
default Nullable(DateTime) DEFAULT '1970-01-01 00:00:00'
) engine = Memory();
insert into test (data) select 1;
select * from test;
data, default
1 , 2106-02-07 06:28:16
```
But in the case of a regular, not Nullable column, it's fine:
```sql
create table test (
data int,
default DateTime DEFAULT '1970-01-01 00:00:00'
) engine = Memory();
insert into test (data) select 1;
select * from test;
data, default
1 ,1970-01-01 01:00:00
```
However, as you can see, the default value is not `1970-01-01 00:00:00` but `1970-01-01 01:00:00` in the result query.
My guess is that the change of behavior is related to timezone handling.
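That guess can be illustrated numerically (my own hypothesis, not a confirmed root cause): parsing '1970-01-01 00:00:00' in Europe/Paris (UTC+1) gives epoch second -3600, and if that value underflows an unsigned 32-bit store, it wraps to exactly the value shown above:

```python
from datetime import datetime, timezone

wrapped = (0 - 3600) % 2**32  # -3600 underflowing a UInt32
utc = datetime.fromtimestamp(wrapped, tz=timezone.utc)
print(wrapped, utc.strftime("%Y-%m-%d %H:%M:%S"))
# 4294963696 2106-02-07 05:28:16  (= 2106-02-07 06:28:16 in Europe/Paris)
```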
I'm using CH 21.8.14.5 in timezone Europe/Paris | https://github.com/ClickHouse/ClickHouse/issues/35801 | https://github.com/ClickHouse/ClickHouse/pull/51356 | 3a48a7b8727d32b8ccbbf8c27ed12eccee4e2fad | 33d7cca9df0ed9d91c3c8ed3009f92142ce69f9d | "2022-03-31T12:19:14Z" | c++ | "2023-07-08T07:34:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,790 | ["src/DataTypes/ObjectUtils.cpp", "src/DataTypes/ObjectUtils.h", "src/Storages/MergeTree/MergeTreeDataWriter.cpp", "src/Storages/MergeTree/MergeTreeDataWriter.h", "src/Storages/MergeTree/MergeTreeSink.cpp", "src/Storages/MergeTree/ReplicatedMergeTreeSink.cpp", "src/Storages/StorageMemory.cpp", "tests/queries/0_stateless/01825_type_json_partitions.reference", "tests/queries/0_stateless/01825_type_json_partitions.sql"] | JSON field in partitioned table insert error when crossing partition boundaries | Good day everyone! I have stumbled upon an interesting issue with semi-structured data experimental feature .
When inserting into a partitioned MergeTree with a JSON column, an insert that crosses partition boundaries throws the error
`DB::Exception: ColumnObject must be converted to ColumnTuple before use. (LOGICAL_ERROR)`
The issue can be reproduced on the `22.3.2.1` release.
**How to reproduce**
* Which ClickHouse server version to use - 22.3.2.1
* Which interface to use, if matters - does not seem to play any role
* Non-default settings, if any - allow_experimental_object_type enabled
* `CREATE TABLE` statements for all tables involved
```
DROP TABLE IF EXISTS jsonTest;
CREATE TABLE jsonTest
(
`data` JSON,
`creationDateUnix` UInt32 CODEC(DoubleDelta, ZSTD(1))
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(toDate(creationDateUnix))
ORDER BY (creationDateUnix);
DROP TABLE IF EXISTS source;
CREATE TABLE source
(
`sourceData` String,
`creationDateUnix` UInt32 CODEC(DoubleDelta, ZSTD(1))
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(toDate(creationDateUnix))
ORDER BY (creationDateUnix);
truncate source;
insert into source (sourceData,creationDateUnix)
select coalesce(actionDetailStr,'{}'), creationDateUnix from statOptJSONstr;
truncate jsonTest;
insert into jsonTest(data, creationDateUnix)
select sourceData,creationDateUnix from source; -- this statement fails with
DB::Exception: ColumnObject must be converted to ColumnTuple before use. (LOGICAL_ERROR) (version 22.3.2.1)
```
I have noticed that if the data belongs to only one partition, there is no issue.
For non-partitioned tables the issue is not reproducible either.
The following statement can be used to check my assumption above:
```
insert into jsonTest(data, creationDateUnix)
select sourceData, creationDateUnix from source Where toYYYYMMDD(toDate(creationDateUnix)) between 20181130 and 20181201;
```
[Here is my dataset in parquet](https://drive.google.com/file/d/13fmXEcfLi-qGrBTz5UUB6hWOMUcPM6xb/view?usp=sharing)
**Expected behavior**
Expected behavior is successful insertion of data
**Error message and/or stacktrace**
SQL Error [1002]: ClickHouse exception, code: 1002, host: localhost, port: 8123; Code: 49. DB::Exception: ColumnObject must be converted to ColumnTuple before use. (LOGICAL_ERROR) (version 22.3.2.1)
| https://github.com/ClickHouse/ClickHouse/issues/35790 | https://github.com/ClickHouse/ClickHouse/pull/35806 | db75bf6f5d8a421b3f8ba14aed25445ef71db8c6 | d08d4a2437991bf0fae5683217343189ed2fb59b | "2022-03-31T08:18:40Z" | c++ | "2022-04-04T14:16:54Z" |
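Since the reporter observed that single-partition inserts succeed, one possible workaround until the fix lands is to pre-split the rows by the `toYYYYMM` partition key and issue one INSERT per group. A minimal sketch of that splitting logic (hypothetical helper in Python, not part of ClickHouse; assumes UTC timestamps):

```python
import time
from collections import defaultdict

def to_yyyymm(creation_date_unix: int) -> int:
    """Mirror toYYYYMM(toDate(creationDateUnix)) for a UTC timestamp."""
    t = time.gmtime(creation_date_unix)
    return t.tm_year * 100 + t.tm_mon

def split_by_partition(rows):
    """Group (sourceData, creationDateUnix) rows so each group stays in one partition."""
    parts = defaultdict(list)
    for source_data, ts in rows:
        parts[to_yyyymm(ts)].append((source_data, ts))
    return parts  # issue one INSERT per group

rows = [('{"a":1}', 1543536000),   # 2018-11-30 UTC
        ('{"a":2}', 1543622400)]   # 2018-12-01 UTC
parts = split_by_partition(rows)
```

Each resulting group then maps to a single `202x`-style monthly partition, matching the condition under which the reporter saw inserts succeed.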
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,757 | ["docker/test/fuzzer/allow-nullable-key.xml", "docker/test/fuzzer/run-fuzzer.sh", "programs/client/Client.cpp", "programs/client/Client.h", "src/Client/ClientBase.h", "src/Client/QueryFuzzer.cpp", "src/Client/QueryFuzzer.h", "src/Storages/MergeTree/registerStorageMergeTree.cpp"] | Fuzzing of data types in tables | AST-based query fuzzer should randomly modify data types in CREATE TABLE queries, mostly by wrapping in Nullable, Array and LowCardinality. | https://github.com/ClickHouse/ClickHouse/issues/35757 | https://github.com/ClickHouse/ClickHouse/pull/40096 | 324c922121736913549ecce0c7a5d7906fa89076 | 02cdc20d5d2ad98ed9850e111e6fe18af4bf7a55 | "2022-03-30T11:32:01Z" | c++ | "2022-10-05T12:53:48Z" |
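The proposed type fuzzing can be sketched as follows (illustrative Python only, not the actual clickhouse-client implementation; a real fuzzer would also have to respect type constraints, e.g. Nullable cannot wrap Array):

```python
import random

WRAPPERS = ("Nullable({})", "Array({})", "LowCardinality({})")

def fuzz_type(type_name: str, rng: random.Random, max_depth: int = 2) -> str:
    """Randomly wrap a column type from a CREATE TABLE query, as the issue proposes."""
    for _ in range(max_depth):
        if rng.random() < 0.5:
            type_name = rng.choice(WRAPPERS).format(type_name)
    return type_name

fuzzed = fuzz_type("UInt64", random.Random(42))
```

The produced string (e.g. `Array(Nullable(UInt64))` for some seeds) would then be substituted back into the CREATE TABLE AST before the query is executed.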
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,751 | ["src/Parsers/ExpressionElementParsers.cpp", "tests/queries/0_stateless/02247_fix_extract_parser.reference", "tests/queries/0_stateless/02247_fix_extract_parser.sql"] | ClickHouse cannot parse table definition after update to v22.3 | ClickHouse server crashed after updating from `yandex/clickhouse-server:21.11.3.6` to `clickhouse/clickhouse-server:22.3.2.2`
It seems like a problem with parsing the metadata file of a materialized view (MV):
```
2022.03.30 09:26:11.300637 [ 1 ] {} <Error> Application: DB::Exception: Syntax error (in file /var/lib/clickhouse/store/990/9909bef6-626f-4868-9909-bef6626f4868/jira_issue.sql): failed at position 5242 (',') (line 113, col 62): , '\\d+')), sprints)) AS first_sprint,
arrayReduce('max', arrayMap(s -> toUInt32OrNull(extract(s, '\\d+')), sprints)) AS last_sprint,
JSONExtractString(. Expected one of: FROM, end of query: Cannot parse definition from metadata file /var/lib/clickhouse/store/990/9909bef6-626f-4868-9909-bef6626f4868/jira_issue.sql
2022.03.30 09:26:19.456468 [ 1 ] {} <Warning> Application: Calculated checksum of the binary: 51010DC62C0638E7259D2BDDE72C485C. There is no information about the reference checksum.
2022.03.30 09:26:19.461712 [ 1 ] {} <Error> CertificateReloader: Cannot obtain modification time for certificate file /etc/clickhouse-server/server.crt, skipping update. errno: 2, strerror: No such file or directory
2022.03.30 09:26:19.461753 [ 1 ] {} <Error> CertificateReloader: Cannot obtain modification time for key file /etc/clickhouse-server/server.key, skipping update. errno: 2, strerror: No such file or directory
2022.03.30 09:26:19.462113 [ 1 ] {} <Error> CertificateReloader: Poco::Exception. Code: 1000, e.code() = 0, SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library:OPENSSL_internal:No such file or directory (version 22.3.2.1)
2022.03.30 09:26:19.607711 [ 1 ] {} <Error> Application: Caught exception while loading metadata: Code: 62. DB::Exception: Syntax error (in file /var/lib/clickhouse/store/990/9909bef6-626f-4868-9909-bef6626f4868/jira_issue.sql): failed at position 5242 (',') (line 113, col 62): , '\\d+')), sprints)) AS first_sprint,
arrayReduce('max', arrayMap(s -> toUInt32OrNull(extract(s, '\\d+')), sprints)) AS last_sprint,
JSONExtractString(. Expected one of: FROM, end of query: Cannot parse definition from metadata file /var/lib/clickhouse/store/990/9909bef6-626f-4868-9909-bef6626f4868/jira_issue.sql. (SYNTAX_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa4dde1a in /usr/bin/clickhouse
1. DB::DatabaseOnDisk::parseQueryFromMetadata(Poco::Logger*, std::__1::shared_ptr<DB::Context const>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, bool) @ 0x13e8d36d in /usr/bin/clickhouse
2. ? @ 0x13f3337a in /usr/bin/clickhouse
3. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xa5878ca in /usr/bin/clickhouse
4. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0xa589a64 in /usr/bin/clickhouse
5. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa584c97 in /usr/bin/clickhouse
6. ? @ 0xa58881d in /usr/bin/clickhouse
7. ? @ 0x7fd533101609 in ?
8. __clone @ 0x7fd533026163 in ?
(version 22.3.2.1)
``` | https://github.com/ClickHouse/ClickHouse/issues/35751 | https://github.com/ClickHouse/ClickHouse/pull/35799 | f2c6387a8d7c8568cd17df7fdd0c992596bb1b20 | cafff71d2f794c4cad45338be6cd7fc89a311a3b | "2022-03-30T09:35:10Z" | c++ | "2022-04-01T08:59:39Z" |
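The failure position points at the two-argument `extract(s, '\\d+')` call; judging by the "Expected one of: FROM" message and the `02247_fix_extract_parser` test in the fix, the 22.3 parser apparently tried to read it as the SQL `EXTRACT(field FROM source)` form. For reference, the logic the failing MV expression computes, sketched in Python (hypothetical helper name):

```python
import re

def sprint_bounds(sprints):
    r"""Equivalent of arrayReduce('min'/'max',
    arrayMap(s -> toUInt32OrNull(extract(s, '\d+')), sprints))."""
    nums = []
    for s in sprints:
        m = re.search(r"\d+", s)  # extract(s, '\d+'): first digit run, if any
        if m:
            nums.append(int(m.group()))
    return (min(nums), max(nums)) if nums else (None, None)

first, last = sprint_bounds(["Sprint 3", "Sprint 12", "backlog"])
```

This is purely an illustration of the expression's semantics; the actual fix was in the parser (`ExpressionElementParsers.cpp`), not in the expression itself.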
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 35,726 | ["src/Core/Settings.h", "src/DataTypes/Serializations/SerializationIP.cpp", "src/Formats/FormatFactory.cpp", "src/Formats/FormatSettings.h", "tests/queries/0_stateless/02244_ip_address_invalid_insert.reference", "tests/queries/0_stateless/02244_ip_address_invalid_insert.sql"] | 22.3 toIPv6 backward incompatible despite cast_ipv4_ipv6_default_on_conversion_error | ```sql
CREATE TABLE test_tbl
( `ip` String, `ipv6` IPv6 MATERIALIZED toIPv6(ip) )
ENGINE = Memory;
insert into test_tbl(ip) values ( 'fe80::9801:43ff:fe1f:7690'), ('1.1.1.1'), (''), ('::ffff:1.1.1.1' );
DB::Exception: Invalid IPv6 value: while executing 'FUNCTION _CAST(toIPv6(ip) :: 2, 'IPv6' :: 1)
set cast_ipv4_ipv6_default_on_conversion_error=1;
insert into test_tbl(ip) values ( 'fe80::9801:43ff:fe1f:7690'), ('1.1.1.1'), (''), ('::ffff:1.1.1.1' );
DB::Exception: Invalid IPv6 value: while executing 'FUNCTION _CAST(toIPv6(ip)
```
`toIPv6OrDefault` does not help in this case.
```sql
CREATE TABLE test_tbl
( `ip` String, `ipv6` IPv6 MATERIALIZED toIPv6OrDefault(ip) )
ENGINE = Memory;
insert into test_tbl(ip) values ( 'fe80::9801:43ff:fe1f:7690'), ('1.1.1.1'), (''), ('::ffff:1.1.1.1' );
DB::Exception: Invalid IPv6 value: while executing 'FUNCTION _CAST(toIPv6OrD
insert into test_tbl(ip) values ( 'fe80::9801:43ff:fe1f:7690'), ('1.1.1.1'), (''), ('::ffff:1.1.1.1' ), ('garbudge');
DB::Exception: Invalid IPv6 value: while executing 'FUNCTION
```
Workaround:
```sql
CREATE TABLE test_tbl
( `ip` String, `ipv6` IPv6 MATERIALIZED if(ip='', '::', ip) )
ENGINE = Memory;
insert into test_tbl(ip) values ( 'fe80::9801:43ff:fe1f:7690'), ('1.1.1.1'), (''), ('::ffff:1.1.1.1' );
select ip, ipv6 from test_tbl;
ββipβββββββββββββββββββββββββ¬βipv6βββββββββββββββββββββββ
β fe80::9801:43ff:fe1f:7690 β fe80::9801:43ff:fe1f:7690 β
β 1.1.1.1 β ::ffff:1.1.1.1 β
β β :: β
β ::ffff:1.1.1.1 β ::ffff:1.1.1.1 β
βββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββ
``` | https://github.com/ClickHouse/ClickHouse/issues/35726 | https://github.com/ClickHouse/ClickHouse/pull/35733 | 19819c72f88e037ba0ed8c368b8f3bdc28faabad | e6c9a36ac79f7c3cd29a47a906d502667fc2c93d | "2022-03-29T15:16:18Z" | c++ | "2022-04-04T10:28:16Z" |
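The semantics the reporter expected from `toIPv6OrDefault` — invalid input mapped to `::` instead of raising — can be sketched with Python's standard `ipaddress` module (illustrative only, not the ClickHouse implementation):

```python
import ipaddress

def to_ipv6_or_default(s: str) -> str:
    """Map any string to an IPv6 address string, falling back to '::' on invalid input."""
    try:
        addr = ipaddress.ip_address(s)
    except ValueError:
        return "::"
    if addr.version == 4:  # represent IPv4 as an IPv4-mapped IPv6 address
        addr = ipaddress.IPv6Address("::ffff:" + s)
    return str(addr)

print(to_ipv6_or_default(""))         # ::
print(to_ipv6_or_default("1.1.1.1"))  # the IPv4-mapped form of 1.1.1.1
```

Under these semantics all four inputs from the failing INSERT above (including the empty string) would convert without error, matching the table in the workaround.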