status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,384 | ["programs/install/Install.cpp"] | Unable to install ClickHouse in pfsense FreeBSD |
**Describe the unexpected behaviour**
Trying to install ClickHouse on FreeBSD (pfSense firewall), following this [link](https://clickhouse.com/#quick-start).
**How to reproduce**
1. Copy the ClickHouse file onto pfSense from [here](https://builds.clickhouse.com/master/freebsd/clickhouse)
2. `chmod a+x ./clickhouse`
3. `sudo ./clickhouse install`
**Which ClickHouse server version to use**
- 21.12.1.8928 (official build)
**Error message and/or stacktrace**
Code: 107. DB::Exception: Cannot obtain path to the binary from /proc/self/exe, file doesn't exist. (FILE_DOESNT_EXIST) (version 21.12.1.8928 (official build))
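For context: the installer resolves its own location through `/proc/self/exe`, which Linux provides but FreeBSD does not, so the lookup fails even though the binary itself runs. Below is a minimal sketch of the usual FreeBSD alternative, `sysctl` with `KERN_PROC_PATHNAME`; the helper name is mine and this is not necessarily what the linked pull request does.

```cpp
#include <sys/types.h>
#include <sys/sysctl.h>
#include <iostream>
#include <string>

/// Resolve the path of the current executable on FreeBSD without procfs.
/// KERN_PROC_PATHNAME with pid -1 refers to the calling process.
static std::string getSelfPathFreeBSD()
{
    int mib[4] = {CTL_KERN, KERN_PROC, KERN_PROC_PATHNAME, -1};
    char buf[4096];
    size_t size = sizeof(buf);
    if (sysctl(mib, 4, buf, &size, nullptr, 0) != 0)
        return {};  /// caller should fall back to argv[0] or report an error
    return std::string(buf);
}

int main()
{
    std::cout << getSelfPathFreeBSD() << '\n';
}
```

Note that simply mounting procfs would likely not help, since FreeBSD's procfs exposes `/proc/curproc/file` rather than `/proc/self/exe`.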

| https://github.com/ClickHouse/ClickHouse/issues/33384 | https://github.com/ClickHouse/ClickHouse/pull/33418 | 7b0aa12630afe76c9e31863e08bfe12d9f4d18e3 | 675fc6ba0758de521ae0b658892fe7d6661fc762 | "2022-01-04T09:50:30Z" | c++ | "2022-01-06T19:51:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,381 | ["contrib/NuRaft", "src/Coordination/CoordinationSettings.cpp", "src/Coordination/CoordinationSettings.h", "src/Coordination/KeeperDispatcher.cpp", "src/Coordination/KeeperServer.cpp", "src/Coordination/KeeperServer.h"] | Cannot allocate RAFT instance. (RAFT_ERROR) due to "Address family not supported by protocol" (IPv6 disabled) |
```
Processing configuration file '/apps/conf/clickhouse-server/config.xml'.
Logging trace to /apps/logs/clickhouse-server/clickhouse-server.log
Logging errors to /apps/logs/clickhouse-server/clickhouse-server.err.log
Logging trace to console
2022.01.04 15:35:45.698159 [ 399760 ] {} <Information> SentryWriter: Sending crash reports is disabled
2022.01.04 15:35:45.713762 [ 399760 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2022.01.04 15:35:45.778619 [ 399760 ] {} <Information> : Starting ClickHouse Keeper 21.12.1.9017 with revision 54457, build id: 1E33508A410EF9C9, PID 399760
2022.01.04 15:35:45.778823 [ 399760 ] {} <Information> Application: starting up
2022.01.04 15:35:45.778863 [ 399760 ] {} <Information> Application: OS Name = Linux, OS Version = 3.10.0-862.9.1.el7.x86_64, OS Architecture = x86_64
2022.01.04 15:35:45.779748 [ 399760 ] {} <Debug> Application: Initializing DateLUT.
2022.01.04 15:35:45.779788 [ 399760 ] {} <Trace> Application: Initialized DateLUT with time zone 'Asia/Shanghai'.
2022.01.04 15:35:45.780169 [ 399760 ] {} <Information> Context: Cannot connect to ZooKeeper (or Keeper) before internal Keeper start, will wait for Keeper synchronously
2022.01.04 15:35:45.780204 [ 399760 ] {} <Debug> KeeperDispatcher: Initializing storage dispatcher
2022.01.04 15:35:45.783134 [ 399760 ] {} <Warning> KeeperLogStore: No logs exists in /apps/dbdat/clickhouse/coordination/log. It's Ok if it's the first run of clickhouse-keeper.
2022.01.04 15:35:45.783216 [ 399760 ] {} <Information> KeeperLogStore: force_sync enabled
2022.01.04 15:35:45.783243 [ 399760 ] {} <Debug> KeeperDispatcher: Waiting server to initialize
2022.01.04 15:35:45.783281 [ 399760 ] {} <Debug> KeeperStateMachine: Totally have 0 snapshots
2022.01.04 15:35:45.783305 [ 399760 ] {} <Debug> KeeperStateMachine: No existing snapshots, last committed log index 0
2022.01.04 15:35:45.783342 [ 399760 ] {} <Warning> KeeperLogStore: Removing all changelogs
2022.01.04 15:35:45.783374 [ 399760 ] {} <Trace> KeeperLogStore: Starting new changelog /apps/dbdat/clickhouse/coordination/log/changelog_1_100000.bin.zstd
2022.01.04 15:35:45.783445 [ 399760 ] {} <Information> KeeperServer: No config in log store and snapshot, probably it's initial run. Will use config from .xml on disk
2022.01.04 15:35:45.786780 [ 399760 ] {} <Error> RaftInstance: got exception: open: Address family not supported by protocol
2022.01.04 15:35:45.787127 [ 399760 ] {} <Error> void DB::KeeperDispatcher::initialize(const Poco::Util::AbstractConfiguration &, bool, bool): Code: 568. DB::Exception: Cannot allocate RAFT instance. (RAFT_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa20c85a in /apps/svr/clickhouse2112/usr/bin/clickhouse-keeper
1. DB::KeeperServer::startup() @ 0x142cd613 in /apps/svr/clickhouse2112/usr/bin/clickhouse-keeper
2. DB::KeeperDispatcher::initialize(Poco::Util::AbstractConfiguration const&, bool, bool) @ 0x142b9f27 in /apps/svr/clickhouse2112/usr/bin/clickhouse-keeper
3. DB::Context::initializeKeeperDispatcher(bool) const @ 0x12c6cfde in /apps/svr/clickhouse2112/usr/bin/clickhouse-keeper
4. DB::Keeper::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xa3b9c96 in /apps/svr/clickhouse2112/usr/bin/clickhouse-keeper
5. Poco::Util::Application::run() @ 0x16ef7d06 in /apps/svr/clickhouse2112/usr/bin/clickhouse-keeper
6. DB::Keeper::run() @ 0xa3b6f94 in /apps/svr/clickhouse2112/usr/bin/clickhouse-keeper
7. mainEntryClickHouseKeeper(int, char**) @ 0xa3b5977 in /apps/svr/clickhouse2112/usr/bin/clickhouse-keeper
8. main @ 0xa206cca in /apps/svr/clickhouse2112/usr/bin/clickhouse-keeper
9. __libc_start_main @ 0x21b35 in /usr/lib64/libc-2.17.so
10. _start @ 0xa09b1ee in /apps/svr/clickhouse2112/usr/bin/clickhouse-keeper
(version 21.12.1.9017 (official build))
2022.01.04 15:35:45.789007 [ 399760 ] {} <Debug> KeeperDispatcher: Shutting down storage dispatcher
2022.01.04 15:35:45.789254 [ 399760 ] {} <Information> KeeperServer: RAFT doesn't start, shutdown not required
2022.01.04 15:35:45.789278 [ 399760 ] {} <Debug> KeeperDispatcher: Dispatcher shut down
2022.01.04 15:35:45.789403 [ 399769 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 42
2022.01.04 15:35:45.789450 [ 399779 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 37
2022.01.04 15:35:45.789571 [ 399784 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 32
2022.01.04 15:35:45.789696 [ 399796 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 20
2022.01.04 15:35:45.789696 [ 399797 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 19
2022.01.04 15:35:45.789736 [ 399799 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 17
2022.01.04 15:35:45.789858 [ 399807 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 9
2022.01.04 15:35:45.790247 [ 399768 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 47
2022.01.04 15:35:45.790840 [ 399772 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 45
2022.01.04 15:35:45.791700 [ 399774 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 43
2022.01.04 15:35:45.791711 [ 399773 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 44
2022.01.04 15:35:45.792596 [ 399775 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 41
2022.01.04 15:35:45.793035 [ 399776 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 40
2022.01.04 15:35:45.793483 [ 399778 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 38
2022.01.04 15:35:45.794059 [ 399777 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 39
2022.01.04 15:35:45.794528 [ 399780 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 36
2022.01.04 15:35:45.795165 [ 399782 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 34
2022.01.04 15:35:45.795197 [ 399785 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 31
2022.01.04 15:35:45.795204 [ 399786 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 30
2022.01.04 15:35:45.795191 [ 399783 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 33
2022.01.04 15:35:45.795210 [ 399788 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 28
2022.01.04 15:35:45.795607 [ 399781 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 35
2022.01.04 15:35:45.795223 [ 399789 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 27
2022.01.04 15:35:45.795220 [ 399787 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 29
2022.01.04 15:35:45.795616 [ 399790 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 26
2022.01.04 15:35:45.795632 [ 399791 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 25
2022.01.04 15:35:45.796511 [ 399792 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 24
2022.01.04 15:35:45.796525 [ 399793 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 23
2022.01.04 15:35:45.796520 [ 399794 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 22
2022.01.04 15:35:45.797407 [ 399795 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 21
2022.01.04 15:35:45.797413 [ 399771 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 46
2022.01.04 15:35:45.797413 [ 399800 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 16
2022.01.04 15:35:45.797418 [ 399802 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 14
2022.01.04 15:35:45.797834 [ 399801 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 15
2022.01.04 15:35:45.797425 [ 399803 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 13
2022.01.04 15:35:45.797827 [ 399798 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 18
2022.01.04 15:35:45.797838 [ 399804 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 12
2022.01.04 15:35:45.798306 [ 399806 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 10
2022.01.04 15:35:45.798330 [ 399811 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 5
2022.01.04 15:35:45.798320 [ 399809 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 7
2022.01.04 15:35:45.798330 [ 399810 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 6
2022.01.04 15:35:45.798337 [ 399812 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 4
2022.01.04 15:35:45.798342 [ 399813 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 3
2022.01.04 15:35:45.798358 [ 399770 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 0
2022.01.04 15:35:45.798310 [ 399808 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 8
2022.01.04 15:35:45.798361 [ 399815 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 1
2022.01.04 15:35:45.798361 [ 399814 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 2
2022.01.04 15:35:45.798733 [ 399805 ] {} <Information> RaftInstance: end of asio worker thread, remaining threads: 11
2022.01.04 15:35:45.799620 [ 399760 ] {} <Error> Application: DB::Exception: Cannot allocate RAFT instance
2022.01.04 15:35:45.799663 [ 399760 ] {} <Information> Application: shutting down
2022.01.04 15:35:45.799681 [ 399760 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2022.01.04 15:35:45.799776 [ 399761 ] {} <Trace> BaseDaemon: Received signal -2
2022.01.04 15:35:45.799845 [ 399761 ] {} <Information> BaseDaemon: Stop SignalListener thread
```
I just ran `ln -s clickhouse clickhouse-keeper` (version 21.12.1.9017).
How should I deal with this problem? | https://github.com/ClickHouse/ClickHouse/issues/33381 | https://github.com/ClickHouse/ClickHouse/pull/33450 | 05a5a81f7a6c99424ddbe343a1293f4345d719a3 | 6dbbf6b4dd608afa4ba3436008aa7e1e9865430c | "2022-01-04T07:40:54Z" | c++ | "2022-01-08T00:43:25Z" |
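For reference, "Address family not supported by protocol" is the errno text for `EAFNOSUPPORT`: creating an `AF_INET6` socket returns it on a host where IPv6 is disabled, which appears to be why the RAFT listener above cannot open its port. Below is a minimal sketch of that mechanism and an IPv4 fallback, written with plain POSIX sockets rather than the actual NuRaft/asio code; the function name is mine.

```cpp
#include <sys/socket.h>
#include <cerrno>
#include <cstdio>

/// Sketch of the failure mode behind "open: Address family not supported
/// by protocol": with IPv6 disabled, socket(AF_INET6, ...) fails with
/// EAFNOSUPPORT. A server can fall back to IPv4 instead of aborting startup.
int openListenSocket()
{
    int fd = socket(AF_INET6, SOCK_STREAM, 0);   /// what the listener tries first
    if (fd < 0 && errno == EAFNOSUPPORT)
    {
        std::fprintf(stderr, "IPv6 unavailable, falling back to IPv4\n");
        fd = socket(AF_INET, SOCK_STREAM, 0);    /// IPv4-only fallback
    }
    return fd;   /// -1 means neither address family could be opened
}
```

An immediate workaround is to re-enable IPv6 on the host; the pull request linked above apparently addresses the same situation on the keeper side.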
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,345 | ["docker/builder/Dockerfile", "docker/builder/Makefile", "docker/builder/README.md", "docker/builder/build.sh"] | build error: unable to find library -lsocket | > Make sure that `git diff` result is empty and you've just pulled fresh master. Try cleaning up cmake cache. Just in case, official build instructions are published here: https://clickhouse.yandex/docs/en/development/build/
**Environment**
Built in a fresh 'clickhouse/binary-builder' Docker image on the master branch; the git commit id is "28f785627f0936b014e5185f5f7ae7300a074370".
Error log:
```
[9888/10168] Linking CXX executable utils/zookeeper-create-entry-to-download-part/zookeeper-create-entry-to-download-part
FAILED: utils/zookeeper-create-entry-to-download-part/zookeeper-create-entry-to-download-part
: && /usr/bin/clang++-13 --target=x86_64-linux-gnu --sysroot=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64/x86_64-linux-gnu/libc --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 -std=c++20 -fdiagnostics-color=always -include /project/jd/clickhouse/base/glibc-compatibility/glibc-compat-2.32.h -fsized-deallocation -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -fasynchronous-unwind-tables -ffile-prefix-map=/project/jd/clickhouse=. -falign-functions=32 -Wall -Wno-unused-command-line-argument -fdiagnostics-absolute-paths -fstrict-vtable-pointers -fexperimental-new-pass-manager -Werror -Wextra -Wframe-larger-than=65536 -Wpedantic -Wno-vla-extension -Wno-zero-length-array -Wno-c11-extensions -Weverything -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-c99-extensions -Wno-conversion -Wno-ctad-maybe-unsupported -Wno-deprecated-dynamic-exception-spec -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-nested-anon-types -Wno-packed -Wno-padded -Wno-shift-sign-overflow -Wno-sign-conversion -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -O3 -DNDEBUG --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --ld-path=/usr/bin/ld.lld-13 -rdynamic -Wl,--no-undefined -no-pie -Wl,-no-pie src/CMakeFiles/clickhouse_malloc.dir/Common/malloc.cpp.o utils/zookeeper-create-entry-to-download-part/CMakeFiles/zookeeper-create-entry-to-download-part.dir/main.cpp.o -o utils/zookeeper-create-entry-to-download-part/zookeeper-create-entry-to-download-part src/libclickhouse_new_delete.a src/libdbms.a src/Common/ZooKeeper/libclickhouse_common_zookeeper.a contrib/boost-cmake/lib_boost_program_options.a contrib/llvm/llvm/lib/libLLVMExecutionEngine.a contrib/llvm/llvm/lib/libLLVMRuntimeDyld.a contrib/llvm/llvm/lib/libLLVMX86CodeGen.a contrib/llvm/llvm/lib/libLLVMCFGuard.a contrib/llvm/llvm/lib/libLLVMX86Desc.a contrib/llvm/llvm/lib/libLLVMX86Info.a contrib/llvm/llvm/lib/libLLVMAsmPrinter.a contrib/llvm/llvm/lib/libLLVMDebugInfoDWARF.a contrib/llvm/llvm/lib/libLLVMGlobalISel.a contrib/llvm/llvm/lib/libLLVMSelectionDAG.a contrib/llvm/llvm/lib/libLLVMMCDisassembler.a contrib/llvm/llvm/lib/libLLVMPasses.a contrib/llvm/llvm/lib/libLLVMCoroutines.a contrib/llvm/llvm/lib/libLLVMHelloNew.a contrib/llvm/llvm/lib/libLLVMObjCARCOpts.a contrib/llvm/llvm/lib/libLLVMCodeGen.a contrib/llvm/llvm/lib/libLLVMipo.a contrib/llvm/llvm/lib/libLLVMFrontendOpenMP.a contrib/llvm/llvm/lib/libLLVMIRReader.a contrib/llvm/llvm/lib/libLLVMAsmParser.a contrib/llvm/llvm/lib/libLLVMLinker.a contrib/llvm/llvm/lib/libLLVMBitWriter.a contrib/llvm/llvm/lib/libLLVMInstrumentation.a contrib/llvm/llvm/lib/libLLVMScalarOpts.a contrib/llvm/llvm/lib/libLLVMAggressiveInstCombine.a contrib/llvm/llvm/lib/libLLVMInstCombine.a contrib/llvm/llvm/lib/libLLVMVectorize.a contrib/llvm/llvm/lib/libLLVMTransformUtils.a contrib/llvm/llvm/lib/libLLVMTarget.a contrib/llvm/llvm/lib/libLLVMAnalysis.a contrib/llvm/llvm/lib/libLLVMProfileData.a contrib/llvm/llvm/lib/libLLVMObject.a contrib/llvm/llvm/lib/libLLVMTextAPI.a contrib/llvm/llvm/lib/libLLVMBitReader.a 
contrib/llvm/llvm/lib/libLLVMCore.a contrib/llvm/llvm/lib/libLLVMRemarks.a contrib/llvm/llvm/lib/libLLVMBitstreamReader.a contrib/llvm/llvm/lib/libLLVMMCParser.a contrib/llvm/llvm/lib/libLLVMMC.a contrib/llvm/llvm/lib/libLLVMBinaryFormat.a contrib/llvm/llvm/lib/libLLVMDebugInfoCodeView.a contrib/llvm/llvm/lib/libLLVMDebugInfoMSF.a contrib/llvm/llvm/lib/libLLVMSupport.a contrib/llvm/llvm/lib/libLLVMDemangle.a contrib/croaring-cmake/libroaring.a contrib/cppkafka-cmake/libcppkafka.a contrib/librdkafka-cmake/librdkafka.a contrib/cyrus-sasl-cmake/libsasl2.a contrib/nuraft-cmake/libnuraft.a contrib/boost-cmake/lib_boost_coroutine.a src/Common/Config/libclickhouse_common_config.a src/Common/ZooKeeper/libclickhouse_common_zookeeper.a contrib/yaml-cpp-cmake/libyaml-cpp.a src/Dictionaries/Embedded/libclickhouse_dictionaries_embedded.a src/Parsers/libclickhouse_parsers.a src/Access/Common/libclickhouse_common_access.a contrib/poco-cmake/MongoDB/lib_poco_mongodb.a src/Common/mysqlxx/libmysqlxx.a src/libclickhouse_common_io.a contrib/boost-cmake/lib_boost_program_options.a contrib/jemalloc-cmake/libjemalloc.a src/Common/StringUtils/libstring_utils.a base/base/libcommon.a contrib/poco-cmake/Net/SSL/lib_poco_net_ssl.a contrib/poco-cmake/Net/lib_poco_net.a contrib/poco-cmake/Crypto/lib_poco_crypto.a contrib/poco-cmake/Util/lib_poco_util.a contrib/poco-cmake/JSON/lib_poco_json.a contrib/poco-cmake/JSON/lib_poco_json_pdjson.a contrib/poco-cmake/XML/lib_poco_xml.a contrib/poco-cmake/XML/lib_poco_xml_expat.a contrib/replxx-cmake/libreplxx.a contrib/cctz-cmake/libcctz.a -Wl,--whole-archive /project/jd/clickhouse/build/contrib/cctz-cmake/libtzdata.a -Wl,--no-whole-archive contrib/fmtlib-cmake/libfmt.a base/widechar_width/libwidechar_width.a contrib/dragonbox-cmake/libdragonbox_to_chars.a contrib/re2-cmake/libre2_st.a contrib/libcpuid-cmake/libcpuid.a contrib/cityhash102/libcityhash.a contrib/poco-cmake/Foundation/lib_poco_foundation.a contrib/poco-cmake/Foundation/lib_poco_foundation_pcre.a contrib/xz-cmake/libliblzma.a contrib/aws-s3-cmake/libaws_s3.a contrib/aws-s3-cmake/libaws_s3_checksums.a contrib/azure-cmake/libazure_sdk.a contrib/curl-cmake/libcurl.a contrib/brotli-cmake/libbrotli.a contrib/bzip2-cmake/libbzip2.a contrib/mariadb-connector-c-cmake/libmariadbclient.a contrib/boost-cmake/lib_boost_system.a contrib/icu-cmake/libicui18n.a contrib/icu-cmake/libicuuc.a contrib/icu-cmake/libicudata.a contrib/capnproto-cmake/libcapnpc.a contrib/capnproto-cmake/libcapnp.a contrib/capnproto-cmake/libkj.a contrib/arrow-cmake/libparquet_static.a contrib/arrow-cmake/libarrow_static.a contrib/boost-cmake/lib_boost_filesystem.a contrib/double-conversion-cmake/libdouble-conversion.a contrib/flatbuffers/libflatbuffers.a contrib/arrow-cmake/libthrift_static.a contrib/avro-cmake/libavrocpp.a contrib/boost-cmake/lib_boost_iostreams.a contrib/openldap-cmake/libldap_r.a contrib/openldap-cmake/liblber.a src/Server/grpc_protos/libclickhouse_grpc_protos.a contrib/grpc/libgrpc++.a contrib/grpc/libgrpc.a contrib/re2-cmake/libre2.a contrib/grpc/third_party/cares/cares/lib/libcares.a -lresolv -lsocket contrib/abseil-cpp/absl/status/libabsl_status.a contrib/grpc/libaddress_sorting.a contrib/grpc/libupb.a contrib/grpc/libgpr.a -ldl contrib/libhdfs3-cmake/libhdfs3.a contrib/protobuf-cmake/liblibprotobuf.a contrib/libgsasl-cmake/libgsasl.a contrib/krb5-cmake/libkrb5.a contrib/libxml2-cmake/liblibxml2.a contrib/s2geometry-cmake/libs2.a contrib/abseil-cpp/absl/strings/libabsl_cord.a contrib/abseil-cpp/absl/strings/libabsl_cordz_info.a 
contrib/abseil-cpp/absl/strings/libabsl_cord_internal.a contrib/abseil-cpp/absl/strings/libabsl_cordz_functions.a contrib/abseil-cpp/absl/strings/libabsl_cordz_handle.a contrib/abseil-cpp/absl/container/libabsl_raw_hash_set.a contrib/abseil-cpp/absl/container/libabsl_hashtablez_sampler.a contrib/abseil-cpp/absl/profiling/libabsl_exponential_biased.a contrib/abseil-cpp/absl/synchronization/libabsl_synchronization.a contrib/abseil-cpp/absl/debugging/libabsl_stacktrace.a contrib/abseil-cpp/absl/debugging/libabsl_symbolize.a contrib/abseil-cpp/absl/debugging/libabsl_debugging_internal.a contrib/abseil-cpp/absl/debugging/libabsl_demangle_internal.a contrib/abseil-cpp/absl/synchronization/libabsl_graphcycles_internal.a contrib/abseil-cpp/absl/time/libabsl_time.a contrib/abseil-cpp/absl/time/libabsl_civil_time.a contrib/abseil-cpp/absl/time/libabsl_time_zone.a contrib/abseil-cpp/absl/base/libabsl_malloc_internal.a contrib/abseil-cpp/absl/hash/libabsl_hash.a contrib/abseil-cpp/absl/types/libabsl_bad_optional_access.a contrib/abseil-cpp/absl/hash/libabsl_city.a contrib/abseil-cpp/absl/types/libabsl_bad_variant_access.a contrib/abseil-cpp/absl/hash/libabsl_low_level_hash.a contrib/abseil-cpp/absl/strings/libabsl_str_format_internal.a contrib/abseil-cpp/absl/strings/libabsl_strings.a contrib/abseil-cpp/absl/numeric/libabsl_int128.a contrib/abseil-cpp/absl/base/libabsl_throw_delegate.a contrib/abseil-cpp/absl/strings/libabsl_strings_internal.a contrib/abseil-cpp/absl/base/libabsl_base.a contrib/abseil-cpp/absl/base/libabsl_spinlock_wait.a -lrt contrib/abseil-cpp/absl/base/libabsl_raw_logging_internal.a contrib/abseil-cpp/absl/base/libabsl_log_severity.a contrib/amqpcpp-cmake/libamqp-cpp.a contrib/libuv-cmake/libuv_a.a contrib/sqlite-cmake/libsqlite.a contrib/cassandra-cmake/libcassandra.a contrib/libuv-cmake/libuv.a -lrt contrib/rocksdb-cmake/librocksdb.a contrib/lz4-cmake/liblz4.a contrib/zstd-cmake/libzstd.a contrib/zlib-ng-cmake/libzlib.a contrib/snappy-cmake/libsnappy.a contrib/libpqxx-cmake/liblibpqxx.a contrib/libpq-cmake/liblibpq.a contrib/boringssl-cmake/libssl.a contrib/boringssl-cmake/libcrypto.a -lpthread contrib/boost-cmake/lib_boost_context.a contrib/libstemmer-c-cmake/libstemmer.a contrib/wordnet-blast-cmake/libwnb.a contrib/boost-cmake/lib_boost_graph.a contrib/boost-cmake/lib_boost_regex.a contrib/lemmagen-c-cmake/liblemmagen.a contrib/simdjson-cmake/libsimdjson.a contrib/consistent-hashing/libconsistent-hashing.a -Wl,--start-group base/glibc-compatibility/libglibc-compatibility.a base/glibc-compatibility/memcpy/libmemcpy.a contrib/libcxx-cmake/libcxx.a contrib/libcxxabi-cmake/libcxxabi.a contrib/libunwind-cmake/libunwind.a -Wl,--end-group -nodefaultlibs /usr/lib/llvm-13/lib/clang/13.0.1/lib/linux/libclang_rt.builtins-x86_64.a -lc -lm -lrt -lpthread -ldl && :
ld.lld-13: error: unable to find library -lsocket
clang: error: linker command failed with exit code 1 (use -v to see invocation)
[9889/10168] Linking CXX executable utils/checksum-for-compressed-block/checksum-for-compressed-block-find-bit-flips
FAILED: utils/checksum-for-compressed-block/checksum-for-compressed-block-find-bit-flips
: && /usr/bin/clang++-13 --target=x86_64-linux-gnu --sysroot=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64/x86_64-linux-gnu/libc --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 -std=c++20 -fdiagnostics-color=always -include /project/jd/clickhouse/base/glibc-compatibility/glibc-compat-2.32.h -fsized-deallocation -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -fasynchronous-unwind-tables -ffile-prefix-map=/project/jd/clickhouse=. -falign-functions=32 -Wall -Wno-unused-command-line-argument -fdiagnostics-absolute-paths -fstrict-vtable-pointers -fexperimental-new-pass-manager -Werror -Wextra -Wframe-larger-than=65536 -Wpedantic -Wno-vla-extension -Wno-zero-length-array -Wno-c11-extensions -Weverything -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-c99-extensions -Wno-conversion -Wno-ctad-maybe-unsupported -Wno-deprecated-dynamic-exception-spec -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-nested-anon-types -Wno-packed -Wno-padded -Wno-shift-sign-overflow -Wno-sign-conversion -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -O3 -DNDEBUG --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --ld-path=/usr/bin/ld.lld-13 -rdynamic -Wl,--no-undefined -no-pie -Wl,-no-pie src/CMakeFiles/clickhouse_malloc.dir/Common/malloc.cpp.o utils/checksum-for-compressed-block/CMakeFiles/checksum-for-compressed-block-find-bit-flips.dir/main.cpp.o -o utils/checksum-for-compressed-block/checksum-for-compressed-block-find-bit-flips src/libclickhouse_new_delete.a src/libdbms.a contrib/llvm/llvm/lib/libLLVMExecutionEngine.a contrib/llvm/llvm/lib/libLLVMRuntimeDyld.a contrib/llvm/llvm/lib/libLLVMX86CodeGen.a contrib/llvm/llvm/lib/libLLVMCFGuard.a contrib/llvm/llvm/lib/libLLVMX86Desc.a contrib/llvm/llvm/lib/libLLVMX86Info.a contrib/llvm/llvm/lib/libLLVMAsmPrinter.a contrib/llvm/llvm/lib/libLLVMDebugInfoDWARF.a contrib/llvm/llvm/lib/libLLVMGlobalISel.a contrib/llvm/llvm/lib/libLLVMSelectionDAG.a contrib/llvm/llvm/lib/libLLVMMCDisassembler.a contrib/llvm/llvm/lib/libLLVMPasses.a contrib/llvm/llvm/lib/libLLVMCoroutines.a contrib/llvm/llvm/lib/libLLVMHelloNew.a contrib/llvm/llvm/lib/libLLVMObjCARCOpts.a contrib/llvm/llvm/lib/libLLVMCodeGen.a contrib/llvm/llvm/lib/libLLVMipo.a contrib/llvm/llvm/lib/libLLVMFrontendOpenMP.a contrib/llvm/llvm/lib/libLLVMIRReader.a contrib/llvm/llvm/lib/libLLVMAsmParser.a contrib/llvm/llvm/lib/libLLVMLinker.a contrib/llvm/llvm/lib/libLLVMBitWriter.a contrib/llvm/llvm/lib/libLLVMInstrumentation.a contrib/llvm/llvm/lib/libLLVMScalarOpts.a contrib/llvm/llvm/lib/libLLVMAggressiveInstCombine.a contrib/llvm/llvm/lib/libLLVMInstCombine.a contrib/llvm/llvm/lib/libLLVMVectorize.a contrib/llvm/llvm/lib/libLLVMTransformUtils.a contrib/llvm/llvm/lib/libLLVMTarget.a contrib/llvm/llvm/lib/libLLVMAnalysis.a contrib/llvm/llvm/lib/libLLVMProfileData.a contrib/llvm/llvm/lib/libLLVMObject.a contrib/llvm/llvm/lib/libLLVMTextAPI.a contrib/llvm/llvm/lib/libLLVMBitReader.a contrib/llvm/llvm/lib/libLLVMCore.a contrib/llvm/llvm/lib/libLLVMRemarks.a contrib/llvm/llvm/lib/libLLVMBitstreamReader.a 
contrib/llvm/llvm/lib/libLLVMMCParser.a contrib/llvm/llvm/lib/libLLVMMC.a contrib/llvm/llvm/lib/libLLVMBinaryFormat.a contrib/llvm/llvm/lib/libLLVMDebugInfoCodeView.a contrib/llvm/llvm/lib/libLLVMDebugInfoMSF.a contrib/llvm/llvm/lib/libLLVMSupport.a contrib/llvm/llvm/lib/libLLVMDemangle.a contrib/croaring-cmake/libroaring.a contrib/cppkafka-cmake/libcppkafka.a contrib/librdkafka-cmake/librdkafka.a contrib/cyrus-sasl-cmake/libsasl2.a contrib/nuraft-cmake/libnuraft.a contrib/boost-cmake/lib_boost_coroutine.a src/Common/Config/libclickhouse_common_config.a contrib/yaml-cpp-cmake/libyaml-cpp.a src/Common/ZooKeeper/libclickhouse_common_zookeeper.a src/Dictionaries/Embedded/libclickhouse_dictionaries_embedded.a src/Parsers/libclickhouse_parsers.a src/Access/Common/libclickhouse_common_access.a contrib/poco-cmake/MongoDB/lib_poco_mongodb.a src/Common/mysqlxx/libmysqlxx.a src/libclickhouse_common_io.a contrib/jemalloc-cmake/libjemalloc.a contrib/boost-cmake/lib_boost_program_options.a src/Common/StringUtils/libstring_utils.a base/widechar_width/libwidechar_width.a base/base/libcommon.a contrib/poco-cmake/Net/SSL/lib_poco_net_ssl.a contrib/poco-cmake/Net/lib_poco_net.a contrib/poco-cmake/Crypto/lib_poco_crypto.a contrib/poco-cmake/Util/lib_poco_util.a contrib/poco-cmake/JSON/lib_poco_json.a contrib/poco-cmake/JSON/lib_poco_json_pdjson.a contrib/poco-cmake/XML/lib_poco_xml.a contrib/poco-cmake/XML/lib_poco_xml_expat.a contrib/replxx-cmake/libreplxx.a contrib/cctz-cmake/libcctz.a -Wl,--whole-archive /project/jd/clickhouse/build/contrib/cctz-cmake/libtzdata.a -Wl,--no-whole-archive contrib/fmtlib-cmake/libfmt.a contrib/dragonbox-cmake/libdragonbox_to_chars.a contrib/re2-cmake/libre2_st.a contrib/libcpuid-cmake/libcpuid.a contrib/cityhash102/libcityhash.a contrib/poco-cmake/Foundation/lib_poco_foundation.a contrib/poco-cmake/Foundation/lib_poco_foundation_pcre.a contrib/xz-cmake/libliblzma.a contrib/aws-s3-cmake/libaws_s3.a contrib/aws-s3-cmake/libaws_s3_checksums.a contrib/azure-cmake/libazure_sdk.a contrib/curl-cmake/libcurl.a contrib/brotli-cmake/libbrotli.a contrib/bzip2-cmake/libbzip2.a contrib/mariadb-connector-c-cmake/libmariadbclient.a contrib/boost-cmake/lib_boost_system.a contrib/icu-cmake/libicui18n.a contrib/icu-cmake/libicuuc.a contrib/icu-cmake/libicudata.a contrib/capnproto-cmake/libcapnpc.a contrib/capnproto-cmake/libcapnp.a contrib/capnproto-cmake/libkj.a contrib/arrow-cmake/libparquet_static.a contrib/arrow-cmake/libarrow_static.a contrib/boost-cmake/lib_boost_filesystem.a contrib/double-conversion-cmake/libdouble-conversion.a contrib/flatbuffers/libflatbuffers.a contrib/arrow-cmake/libthrift_static.a contrib/avro-cmake/libavrocpp.a contrib/boost-cmake/lib_boost_iostreams.a contrib/openldap-cmake/libldap_r.a contrib/openldap-cmake/liblber.a src/Server/grpc_protos/libclickhouse_grpc_protos.a contrib/grpc/libgrpc++.a contrib/grpc/libgrpc.a contrib/re2-cmake/libre2.a contrib/grpc/third_party/cares/cares/lib/libcares.a -lresolv -lsocket contrib/abseil-cpp/absl/status/libabsl_status.a contrib/grpc/libaddress_sorting.a contrib/grpc/libupb.a contrib/grpc/libgpr.a -ldl contrib/libhdfs3-cmake/libhdfs3.a contrib/protobuf-cmake/liblibprotobuf.a contrib/libgsasl-cmake/libgsasl.a contrib/krb5-cmake/libkrb5.a contrib/libxml2-cmake/liblibxml2.a contrib/s2geometry-cmake/libs2.a contrib/abseil-cpp/absl/strings/libabsl_cord.a contrib/abseil-cpp/absl/strings/libabsl_cordz_info.a contrib/abseil-cpp/absl/strings/libabsl_cord_internal.a contrib/abseil-cpp/absl/strings/libabsl_cordz_functions.a 
contrib/abseil-cpp/absl/strings/libabsl_cordz_handle.a contrib/abseil-cpp/absl/container/libabsl_raw_hash_set.a contrib/abseil-cpp/absl/container/libabsl_hashtablez_sampler.a contrib/abseil-cpp/absl/profiling/libabsl_exponential_biased.a contrib/abseil-cpp/absl/synchronization/libabsl_synchronization.a contrib/abseil-cpp/absl/debugging/libabsl_stacktrace.a contrib/abseil-cpp/absl/debugging/libabsl_symbolize.a contrib/abseil-cpp/absl/debugging/libabsl_debugging_internal.a contrib/abseil-cpp/absl/debugging/libabsl_demangle_internal.a contrib/abseil-cpp/absl/synchronization/libabsl_graphcycles_internal.a contrib/abseil-cpp/absl/time/libabsl_time.a contrib/abseil-cpp/absl/time/libabsl_civil_time.a contrib/abseil-cpp/absl/time/libabsl_time_zone.a contrib/abseil-cpp/absl/base/libabsl_malloc_internal.a contrib/abseil-cpp/absl/hash/libabsl_hash.a contrib/abseil-cpp/absl/types/libabsl_bad_optional_access.a contrib/abseil-cpp/absl/hash/libabsl_city.a contrib/abseil-cpp/absl/types/libabsl_bad_variant_access.a contrib/abseil-cpp/absl/hash/libabsl_low_level_hash.a contrib/abseil-cpp/absl/strings/libabsl_str_format_internal.a contrib/abseil-cpp/absl/strings/libabsl_strings.a contrib/abseil-cpp/absl/numeric/libabsl_int128.a contrib/abseil-cpp/absl/base/libabsl_throw_delegate.a contrib/abseil-cpp/absl/strings/libabsl_strings_internal.a contrib/abseil-cpp/absl/base/libabsl_base.a contrib/abseil-cpp/absl/base/libabsl_spinlock_wait.a -lrt contrib/abseil-cpp/absl/base/libabsl_raw_logging_internal.a contrib/abseil-cpp/absl/base/libabsl_log_severity.a contrib/amqpcpp-cmake/libamqp-cpp.a contrib/libuv-cmake/libuv_a.a contrib/sqlite-cmake/libsqlite.a contrib/cassandra-cmake/libcassandra.a contrib/libuv-cmake/libuv.a -lrt contrib/rocksdb-cmake/librocksdb.a contrib/lz4-cmake/liblz4.a contrib/zstd-cmake/libzstd.a contrib/zlib-ng-cmake/libzlib.a contrib/snappy-cmake/libsnappy.a contrib/libpqxx-cmake/liblibpqxx.a contrib/libpq-cmake/liblibpq.a contrib/boringssl-cmake/libssl.a contrib/boringssl-cmake/libcrypto.a -lpthread contrib/boost-cmake/lib_boost_context.a contrib/libstemmer-c-cmake/libstemmer.a contrib/wordnet-blast-cmake/libwnb.a contrib/boost-cmake/lib_boost_graph.a contrib/boost-cmake/lib_boost_regex.a contrib/lemmagen-c-cmake/liblemmagen.a contrib/simdjson-cmake/libsimdjson.a contrib/consistent-hashing/libconsistent-hashing.a -Wl,--start-group base/glibc-compatibility/libglibc-compatibility.a base/glibc-compatibility/memcpy/libmemcpy.a contrib/libcxx-cmake/libcxx.a contrib/libcxxabi-cmake/libcxxabi.a contrib/libunwind-cmake/libunwind.a -Wl,--end-group -nodefaultlibs /usr/lib/llvm-13/lib/clang/13.0.1/lib/linux/libclang_rt.builtins-x86_64.a -lc -lm -lrt -lpthread -ldl && :
ld.lld-13: error: unable to find library -lsocket
clang: error: linker command failed with exit code 1 (use -v to see invocation)
[9890/10168] Linking CXX executable utils/check-mysql-binlog/check-mysql-binlog
FAILED: utils/check-mysql-binlog/check-mysql-binlog
: && /usr/bin/clang++-13 --target=x86_64-linux-gnu --sysroot=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64/x86_64-linux-gnu/libc --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 -std=c++20 -fdiagnostics-color=always -include /project/jd/clickhouse/base/glibc-compatibility/glibc-compat-2.32.h -fsized-deallocation -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -fasynchronous-unwind-tables -ffile-prefix-map=/project/jd/clickhouse=. -falign-functions=32 -Wall -Wno-unused-command-line-argument -fdiagnostics-absolute-paths -fstrict-vtable-pointers -fexperimental-new-pass-manager -Werror -Wextra -Wframe-larger-than=65536 -Wpedantic -Wno-vla-extension -Wno-zero-length-array -Wno-c11-extensions -Weverything -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-c99-extensions -Wno-conversion -Wno-ctad-maybe-unsupported -Wno-deprecated-dynamic-exception-spec -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-nested-anon-types -Wno-packed -Wno-padded -Wno-shift-sign-overflow -Wno-sign-conversion -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -O3 -DNDEBUG --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --ld-path=/usr/bin/ld.lld-13 -rdynamic -Wl,--no-undefined -no-pie -Wl,-no-pie src/CMakeFiles/clickhouse_malloc.dir/Common/malloc.cpp.o utils/check-mysql-binlog/CMakeFiles/check-mysql-binlog.dir/main.cpp.o -o utils/check-mysql-binlog/check-mysql-binlog src/libclickhouse_new_delete.a src/libdbms.a contrib/boost-cmake/lib_boost_program_options.a contrib/llvm/llvm/lib/libLLVMExecutionEngine.a contrib/llvm/llvm/lib/libLLVMRuntimeDyld.a contrib/llvm/llvm/lib/libLLVMX86CodeGen.a contrib/llvm/llvm/lib/libLLVMCFGuard.a contrib/llvm/llvm/lib/libLLVMX86Desc.a contrib/llvm/llvm/lib/libLLVMX86Info.a contrib/llvm/llvm/lib/libLLVMAsmPrinter.a contrib/llvm/llvm/lib/libLLVMDebugInfoDWARF.a contrib/llvm/llvm/lib/libLLVMGlobalISel.a contrib/llvm/llvm/lib/libLLVMSelectionDAG.a contrib/llvm/llvm/lib/libLLVMMCDisassembler.a contrib/llvm/llvm/lib/libLLVMPasses.a contrib/llvm/llvm/lib/libLLVMCoroutines.a contrib/llvm/llvm/lib/libLLVMHelloNew.a contrib/llvm/llvm/lib/libLLVMObjCARCOpts.a contrib/llvm/llvm/lib/libLLVMCodeGen.a contrib/llvm/llvm/lib/libLLVMipo.a contrib/llvm/llvm/lib/libLLVMFrontendOpenMP.a contrib/llvm/llvm/lib/libLLVMIRReader.a contrib/llvm/llvm/lib/libLLVMAsmParser.a contrib/llvm/llvm/lib/libLLVMLinker.a contrib/llvm/llvm/lib/libLLVMBitWriter.a contrib/llvm/llvm/lib/libLLVMInstrumentation.a contrib/llvm/llvm/lib/libLLVMScalarOpts.a contrib/llvm/llvm/lib/libLLVMAggressiveInstCombine.a contrib/llvm/llvm/lib/libLLVMInstCombine.a contrib/llvm/llvm/lib/libLLVMVectorize.a contrib/llvm/llvm/lib/libLLVMTransformUtils.a contrib/llvm/llvm/lib/libLLVMTarget.a contrib/llvm/llvm/lib/libLLVMAnalysis.a contrib/llvm/llvm/lib/libLLVMProfileData.a contrib/llvm/llvm/lib/libLLVMObject.a contrib/llvm/llvm/lib/libLLVMTextAPI.a contrib/llvm/llvm/lib/libLLVMBitReader.a contrib/llvm/llvm/lib/libLLVMCore.a contrib/llvm/llvm/lib/libLLVMRemarks.a contrib/llvm/llvm/lib/libLLVMBitstreamReader.a contrib/llvm/llvm/lib/libLLVMMCParser.a 
contrib/llvm/llvm/lib/libLLVMMC.a contrib/llvm/llvm/lib/libLLVMBinaryFormat.a contrib/llvm/llvm/lib/libLLVMDebugInfoCodeView.a contrib/llvm/llvm/lib/libLLVMDebugInfoMSF.a contrib/llvm/llvm/lib/libLLVMSupport.a contrib/llvm/llvm/lib/libLLVMDemangle.a contrib/croaring-cmake/libroaring.a contrib/cppkafka-cmake/libcppkafka.a contrib/librdkafka-cmake/librdkafka.a contrib/cyrus-sasl-cmake/libsasl2.a contrib/nuraft-cmake/libnuraft.a contrib/boost-cmake/lib_boost_coroutine.a src/Common/Config/libclickhouse_common_config.a contrib/yaml-cpp-cmake/libyaml-cpp.a src/Common/ZooKeeper/libclickhouse_common_zookeeper.a src/Dictionaries/Embedded/libclickhouse_dictionaries_embedded.a src/Parsers/libclickhouse_parsers.a src/Access/Common/libclickhouse_common_access.a contrib/poco-cmake/MongoDB/lib_poco_mongodb.a src/Common/mysqlxx/libmysqlxx.a src/libclickhouse_common_io.a contrib/boost-cmake/lib_boost_program_options.a contrib/jemalloc-cmake/libjemalloc.a src/Common/StringUtils/libstring_utils.a base/widechar_width/libwidechar_width.a base/base/libcommon.a contrib/poco-cmake/Net/SSL/lib_poco_net_ssl.a contrib/poco-cmake/Net/lib_poco_net.a contrib/poco-cmake/Crypto/lib_poco_crypto.a contrib/poco-cmake/Util/lib_poco_util.a contrib/poco-cmake/JSON/lib_poco_json.a contrib/poco-cmake/JSON/lib_poco_json_pdjson.a contrib/poco-cmake/XML/lib_poco_xml.a contrib/poco-cmake/XML/lib_poco_xml_expat.a contrib/replxx-cmake/libreplxx.a contrib/cctz-cmake/libcctz.a -Wl,--whole-archive /project/jd/clickhouse/build/contrib/cctz-cmake/libtzdata.a -Wl,--no-whole-archive contrib/fmtlib-cmake/libfmt.a contrib/dragonbox-cmake/libdragonbox_to_chars.a contrib/re2-cmake/libre2_st.a contrib/libcpuid-cmake/libcpuid.a contrib/cityhash102/libcityhash.a contrib/poco-cmake/Foundation/lib_poco_foundation.a contrib/poco-cmake/Foundation/lib_poco_foundation_pcre.a contrib/xz-cmake/libliblzma.a contrib/aws-s3-cmake/libaws_s3.a contrib/aws-s3-cmake/libaws_s3_checksums.a contrib/azure-cmake/libazure_sdk.a contrib/curl-cmake/libcurl.a contrib/brotli-cmake/libbrotli.a contrib/bzip2-cmake/libbzip2.a contrib/mariadb-connector-c-cmake/libmariadbclient.a contrib/boost-cmake/lib_boost_system.a contrib/icu-cmake/libicui18n.a contrib/icu-cmake/libicuuc.a contrib/icu-cmake/libicudata.a contrib/capnproto-cmake/libcapnpc.a contrib/capnproto-cmake/libcapnp.a contrib/capnproto-cmake/libkj.a contrib/arrow-cmake/libparquet_static.a contrib/arrow-cmake/libarrow_static.a contrib/boost-cmake/lib_boost_filesystem.a contrib/double-conversion-cmake/libdouble-conversion.a contrib/flatbuffers/libflatbuffers.a contrib/arrow-cmake/libthrift_static.a contrib/avro-cmake/libavrocpp.a contrib/boost-cmake/lib_boost_iostreams.a contrib/openldap-cmake/libldap_r.a contrib/openldap-cmake/liblber.a src/Server/grpc_protos/libclickhouse_grpc_protos.a contrib/grpc/libgrpc++.a contrib/grpc/libgrpc.a contrib/re2-cmake/libre2.a contrib/grpc/third_party/cares/cares/lib/libcares.a -lresolv -lsocket contrib/abseil-cpp/absl/status/libabsl_status.a contrib/grpc/libaddress_sorting.a contrib/grpc/libupb.a contrib/grpc/libgpr.a -ldl contrib/libhdfs3-cmake/libhdfs3.a contrib/protobuf-cmake/liblibprotobuf.a contrib/libgsasl-cmake/libgsasl.a contrib/krb5-cmake/libkrb5.a contrib/libxml2-cmake/liblibxml2.a contrib/s2geometry-cmake/libs2.a contrib/abseil-cpp/absl/strings/libabsl_cord.a contrib/abseil-cpp/absl/strings/libabsl_cordz_info.a contrib/abseil-cpp/absl/strings/libabsl_cord_internal.a contrib/abseil-cpp/absl/strings/libabsl_cordz_functions.a 
contrib/abseil-cpp/absl/strings/libabsl_cordz_handle.a contrib/abseil-cpp/absl/container/libabsl_raw_hash_set.a contrib/abseil-cpp/absl/container/libabsl_hashtablez_sampler.a contrib/abseil-cpp/absl/profiling/libabsl_exponential_biased.a contrib/abseil-cpp/absl/synchronization/libabsl_synchronization.a contrib/abseil-cpp/absl/debugging/libabsl_stacktrace.a contrib/abseil-cpp/absl/debugging/libabsl_symbolize.a contrib/abseil-cpp/absl/debugging/libabsl_debugging_internal.a contrib/abseil-cpp/absl/debugging/libabsl_demangle_internal.a contrib/abseil-cpp/absl/synchronization/libabsl_graphcycles_internal.a contrib/abseil-cpp/absl/time/libabsl_time.a contrib/abseil-cpp/absl/time/libabsl_civil_time.a contrib/abseil-cpp/absl/time/libabsl_time_zone.a contrib/abseil-cpp/absl/base/libabsl_malloc_internal.a contrib/abseil-cpp/absl/hash/libabsl_hash.a contrib/abseil-cpp/absl/types/libabsl_bad_optional_access.a contrib/abseil-cpp/absl/hash/libabsl_city.a contrib/abseil-cpp/absl/types/libabsl_bad_variant_access.a contrib/abseil-cpp/absl/hash/libabsl_low_level_hash.a contrib/abseil-cpp/absl/strings/libabsl_str_format_internal.a contrib/abseil-cpp/absl/strings/libabsl_strings.a contrib/abseil-cpp/absl/numeric/libabsl_int128.a contrib/abseil-cpp/absl/base/libabsl_throw_delegate.a contrib/abseil-cpp/absl/strings/libabsl_strings_internal.a contrib/abseil-cpp/absl/base/libabsl_base.a contrib/abseil-cpp/absl/base/libabsl_spinlock_wait.a -lrt contrib/abseil-cpp/absl/base/libabsl_raw_logging_internal.a contrib/abseil-cpp/absl/base/libabsl_log_severity.a contrib/amqpcpp-cmake/libamqp-cpp.a contrib/libuv-cmake/libuv_a.a contrib/sqlite-cmake/libsqlite.a contrib/cassandra-cmake/libcassandra.a contrib/libuv-cmake/libuv.a -lrt contrib/rocksdb-cmake/librocksdb.a contrib/lz4-cmake/liblz4.a contrib/zstd-cmake/libzstd.a contrib/zlib-ng-cmake/libzlib.a contrib/snappy-cmake/libsnappy.a contrib/libpqxx-cmake/liblibpqxx.a contrib/libpq-cmake/liblibpq.a contrib/boringssl-cmake/libssl.a contrib/boringssl-cmake/libcrypto.a -lpthread contrib/boost-cmake/lib_boost_context.a contrib/libstemmer-c-cmake/libstemmer.a contrib/wordnet-blast-cmake/libwnb.a contrib/boost-cmake/lib_boost_graph.a contrib/boost-cmake/lib_boost_regex.a contrib/lemmagen-c-cmake/liblemmagen.a contrib/simdjson-cmake/libsimdjson.a contrib/consistent-hashing/libconsistent-hashing.a -Wl,--start-group base/glibc-compatibility/libglibc-compatibility.a base/glibc-compatibility/memcpy/libmemcpy.a contrib/libcxx-cmake/libcxx.a contrib/libcxxabi-cmake/libcxxabi.a contrib/libunwind-cmake/libunwind.a -Wl,--end-group -nodefaultlibs /usr/lib/llvm-13/lib/clang/13.0.1/lib/linux/libclang_rt.builtins-x86_64.a -lc -lm -lrt -lpthread -ldl && :
ld.lld-13: error: unable to find library -lsocket
clang: error: linker command failed with exit code 1 (use -v to see invocation)
[9891/10168] Linking CXX executable utils/memcpy-bench/memcpy-bench
FAILED: utils/memcpy-bench/memcpy-bench
: && /usr/bin/clang++-13 --target=x86_64-linux-gnu --sysroot=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64/x86_64-linux-gnu/libc --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 -std=c++20 -fdiagnostics-color=always -include /project/jd/clickhouse/base/glibc-compatibility/glibc-compat-2.32.h -fsized-deallocation -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -fasynchronous-unwind-tables -ffile-prefix-map=/project/jd/clickhouse=. -falign-functions=32 -Wall -Wno-unused-command-line-argument -fdiagnostics-absolute-paths -fstrict-vtable-pointers -fexperimental-new-pass-manager -Werror -Wextra -Wframe-larger-than=65536 -Wpedantic -Wno-vla-extension -Wno-zero-length-array -Wno-c11-extensions -Weverything -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-c99-extensions -Wno-conversion -Wno-ctad-maybe-unsupported -Wno-deprecated-dynamic-exception-spec -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-nested-anon-types -Wno-packed -Wno-padded -Wno-shift-sign-overflow -Wno-sign-conversion -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -O3 -DNDEBUG --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --ld-path=/usr/bin/ld.lld-13 -rdynamic -Wl,--no-undefined -no-pie -Wl,-no-pie src/CMakeFiles/clickhouse_malloc.dir/Common/malloc.cpp.o utils/memcpy-bench/CMakeFiles/memcpy-bench.dir/memcpy-bench.cpp.o utils/memcpy-bench/CMakeFiles/memcpy-bench.dir/FastMemcpy.cpp.o utils/memcpy-bench/CMakeFiles/memcpy-bench.dir/FastMemcpy_Avx.cpp.o utils/memcpy-bench/CMakeFiles/memcpy-bench.dir/memcpy_jart.S.o utils/memcpy-bench/CMakeFiles/memcpy-bench.dir/glibc/memcpy-ssse3.S.o utils/memcpy-bench/CMakeFiles/memcpy-bench.dir/glibc/memcpy-ssse3-back.S.o utils/memcpy-bench/CMakeFiles/memcpy-bench.dir/glibc/memmove-sse2-unaligned-erms.S.o utils/memcpy-bench/CMakeFiles/memcpy-bench.dir/glibc/memmove-avx-unaligned-erms.S.o utils/memcpy-bench/CMakeFiles/memcpy-bench.dir/glibc/memmove-avx512-unaligned-erms.S.o utils/memcpy-bench/CMakeFiles/memcpy-bench.dir/glibc/memmove-avx512-no-vzeroupper.S.o -o utils/memcpy-bench/memcpy-bench src/libclickhouse_new_delete.a src/libdbms.a contrib/boost-cmake/lib_boost_program_options.a contrib/llvm/llvm/lib/libLLVMExecutionEngine.a contrib/llvm/llvm/lib/libLLVMRuntimeDyld.a contrib/llvm/llvm/lib/libLLVMX86CodeGen.a contrib/llvm/llvm/lib/libLLVMCFGuard.a contrib/llvm/llvm/lib/libLLVMX86Desc.a contrib/llvm/llvm/lib/libLLVMX86Info.a contrib/llvm/llvm/lib/libLLVMAsmPrinter.a contrib/llvm/llvm/lib/libLLVMDebugInfoDWARF.a contrib/llvm/llvm/lib/libLLVMGlobalISel.a contrib/llvm/llvm/lib/libLLVMSelectionDAG.a contrib/llvm/llvm/lib/libLLVMMCDisassembler.a contrib/llvm/llvm/lib/libLLVMPasses.a contrib/llvm/llvm/lib/libLLVMCoroutines.a contrib/llvm/llvm/lib/libLLVMHelloNew.a contrib/llvm/llvm/lib/libLLVMObjCARCOpts.a contrib/llvm/llvm/lib/libLLVMCodeGen.a contrib/llvm/llvm/lib/libLLVMipo.a contrib/llvm/llvm/lib/libLLVMFrontendOpenMP.a contrib/llvm/llvm/lib/libLLVMIRReader.a contrib/llvm/llvm/lib/libLLVMAsmParser.a contrib/llvm/llvm/lib/libLLVMLinker.a contrib/llvm/llvm/lib/libLLVMBitWriter.a 
contrib/llvm/llvm/lib/libLLVMInstrumentation.a contrib/llvm/llvm/lib/libLLVMScalarOpts.a contrib/llvm/llvm/lib/libLLVMAggressiveInstCombine.a contrib/llvm/llvm/lib/libLLVMInstCombine.a contrib/llvm/llvm/lib/libLLVMVectorize.a contrib/llvm/llvm/lib/libLLVMTransformUtils.a contrib/llvm/llvm/lib/libLLVMTarget.a contrib/llvm/llvm/lib/libLLVMAnalysis.a contrib/llvm/llvm/lib/libLLVMProfileData.a contrib/llvm/llvm/lib/libLLVMObject.a contrib/llvm/llvm/lib/libLLVMTextAPI.a contrib/llvm/llvm/lib/libLLVMBitReader.a contrib/llvm/llvm/lib/libLLVMCore.a contrib/llvm/llvm/lib/libLLVMRemarks.a contrib/llvm/llvm/lib/libLLVMBitstreamReader.a contrib/llvm/llvm/lib/libLLVMMCParser.a contrib/llvm/llvm/lib/libLLVMMC.a contrib/llvm/llvm/lib/libLLVMBinaryFormat.a contrib/llvm/llvm/lib/libLLVMDebugInfoCodeView.a contrib/llvm/llvm/lib/libLLVMDebugInfoMSF.a contrib/llvm/llvm/lib/libLLVMSupport.a contrib/llvm/llvm/lib/libLLVMDemangle.a contrib/croaring-cmake/libroaring.a contrib/cppkafka-cmake/libcppkafka.a contrib/librdkafka-cmake/librdkafka.a contrib/cyrus-sasl-cmake/libsasl2.a contrib/nuraft-cmake/libnuraft.a contrib/boost-cmake/lib_boost_coroutine.a src/Common/Config/libclickhouse_common_config.a contrib/yaml-cpp-cmake/libyaml-cpp.a src/Common/ZooKeeper/libclickhouse_common_zookeeper.a src/Dictionaries/Embedded/libclickhouse_dictionaries_embedded.a src/Parsers/libclickhouse_parsers.a src/Access/Common/libclickhouse_common_access.a contrib/poco-cmake/MongoDB/lib_poco_mongodb.a src/Common/mysqlxx/libmysqlxx.a src/libclickhouse_common_io.a contrib/boost-cmake/lib_boost_program_options.a contrib/jemalloc-cmake/libjemalloc.a src/Common/StringUtils/libstring_utils.a base/widechar_width/libwidechar_width.a base/base/libcommon.a contrib/poco-cmake/Net/SSL/lib_poco_net_ssl.a contrib/poco-cmake/Net/lib_poco_net.a contrib/poco-cmake/Crypto/lib_poco_crypto.a contrib/poco-cmake/Util/lib_poco_util.a contrib/poco-cmake/JSON/lib_poco_json.a contrib/poco-cmake/JSON/lib_poco_json_pdjson.a contrib/poco-cmake/XML/lib_poco_xml.a contrib/poco-cmake/XML/lib_poco_xml_expat.a contrib/replxx-cmake/libreplxx.a contrib/cctz-cmake/libcctz.a -Wl,--whole-archive /project/jd/clickhouse/build/contrib/cctz-cmake/libtzdata.a -Wl,--no-whole-archive contrib/fmtlib-cmake/libfmt.a contrib/dragonbox-cmake/libdragonbox_to_chars.a contrib/re2-cmake/libre2_st.a contrib/libcpuid-cmake/libcpuid.a contrib/cityhash102/libcityhash.a contrib/poco-cmake/Foundation/lib_poco_foundation.a contrib/poco-cmake/Foundation/lib_poco_foundation_pcre.a contrib/xz-cmake/libliblzma.a contrib/aws-s3-cmake/libaws_s3.a contrib/aws-s3-cmake/libaws_s3_checksums.a contrib/azure-cmake/libazure_sdk.a contrib/curl-cmake/libcurl.a contrib/brotli-cmake/libbrotli.a contrib/bzip2-cmake/libbzip2.a contrib/mariadb-connector-c-cmake/libmariadbclient.a contrib/boost-cmake/lib_boost_system.a contrib/icu-cmake/libicui18n.a contrib/icu-cmake/libicuuc.a contrib/icu-cmake/libicudata.a contrib/capnproto-cmake/libcapnpc.a contrib/capnproto-cmake/libcapnp.a contrib/capnproto-cmake/libkj.a contrib/arrow-cmake/libparquet_static.a contrib/arrow-cmake/libarrow_static.a contrib/boost-cmake/lib_boost_filesystem.a contrib/double-conversion-cmake/libdouble-conversion.a contrib/flatbuffers/libflatbuffers.a contrib/arrow-cmake/libthrift_static.a contrib/avro-cmake/libavrocpp.a contrib/boost-cmake/lib_boost_iostreams.a contrib/openldap-cmake/libldap_r.a contrib/openldap-cmake/liblber.a src/Server/grpc_protos/libclickhouse_grpc_protos.a contrib/grpc/libgrpc++.a contrib/grpc/libgrpc.a contrib/re2-cmake/libre2.a 
contrib/grpc/third_party/cares/cares/lib/libcares.a -lresolv -lsocket contrib/abseil-cpp/absl/status/libabsl_status.a contrib/grpc/libaddress_sorting.a contrib/grpc/libupb.a contrib/grpc/libgpr.a -ldl contrib/libhdfs3-cmake/libhdfs3.a contrib/protobuf-cmake/liblibprotobuf.a contrib/libgsasl-cmake/libgsasl.a contrib/krb5-cmake/libkrb5.a contrib/libxml2-cmake/liblibxml2.a contrib/s2geometry-cmake/libs2.a contrib/abseil-cpp/absl/strings/libabsl_cord.a contrib/abseil-cpp/absl/strings/libabsl_cordz_info.a contrib/abseil-cpp/absl/strings/libabsl_cord_internal.a contrib/abseil-cpp/absl/strings/libabsl_cordz_functions.a contrib/abseil-cpp/absl/strings/libabsl_cordz_handle.a contrib/abseil-cpp/absl/container/libabsl_raw_hash_set.a contrib/abseil-cpp/absl/container/libabsl_hashtablez_sampler.a contrib/abseil-cpp/absl/profiling/libabsl_exponential_biased.a contrib/abseil-cpp/absl/synchronization/libabsl_synchronization.a contrib/abseil-cpp/absl/debugging/libabsl_stacktrace.a contrib/abseil-cpp/absl/debugging/libabsl_symbolize.a contrib/abseil-cpp/absl/debugging/libabsl_debugging_internal.a contrib/abseil-cpp/absl/debugging/libabsl_demangle_internal.a contrib/abseil-cpp/absl/synchronization/libabsl_graphcycles_internal.a contrib/abseil-cpp/absl/time/libabsl_time.a contrib/abseil-cpp/absl/time/libabsl_civil_time.a contrib/abseil-cpp/absl/time/libabsl_time_zone.a contrib/abseil-cpp/absl/base/libabsl_malloc_internal.a contrib/abseil-cpp/absl/hash/libabsl_hash.a contrib/abseil-cpp/absl/types/libabsl_bad_optional_access.a contrib/abseil-cpp/absl/hash/libabsl_city.a contrib/abseil-cpp/absl/types/libabsl_bad_variant_access.a contrib/abseil-cpp/absl/hash/libabsl_low_level_hash.a contrib/abseil-cpp/absl/strings/libabsl_str_format_internal.a contrib/abseil-cpp/absl/strings/libabsl_strings.a contrib/abseil-cpp/absl/numeric/libabsl_int128.a contrib/abseil-cpp/absl/base/libabsl_throw_delegate.a contrib/abseil-cpp/absl/strings/libabsl_strings_internal.a contrib/abseil-cpp/absl/base/libabsl_base.a contrib/abseil-cpp/absl/base/libabsl_spinlock_wait.a -lrt contrib/abseil-cpp/absl/base/libabsl_raw_logging_internal.a contrib/abseil-cpp/absl/base/libabsl_log_severity.a contrib/amqpcpp-cmake/libamqp-cpp.a contrib/libuv-cmake/libuv_a.a contrib/sqlite-cmake/libsqlite.a contrib/cassandra-cmake/libcassandra.a contrib/libuv-cmake/libuv.a -lrt contrib/rocksdb-cmake/librocksdb.a contrib/lz4-cmake/liblz4.a contrib/zstd-cmake/libzstd.a contrib/zlib-ng-cmake/libzlib.a contrib/snappy-cmake/libsnappy.a contrib/libpqxx-cmake/liblibpqxx.a contrib/libpq-cmake/liblibpq.a contrib/boringssl-cmake/libssl.a contrib/boringssl-cmake/libcrypto.a -lpthread contrib/boost-cmake/lib_boost_context.a contrib/libstemmer-c-cmake/libstemmer.a contrib/wordnet-blast-cmake/libwnb.a contrib/boost-cmake/lib_boost_graph.a contrib/boost-cmake/lib_boost_regex.a contrib/lemmagen-c-cmake/liblemmagen.a contrib/simdjson-cmake/libsimdjson.a contrib/consistent-hashing/libconsistent-hashing.a -Wl,--start-group base/glibc-compatibility/libglibc-compatibility.a base/glibc-compatibility/memcpy/libmemcpy.a contrib/libcxx-cmake/libcxx.a contrib/libcxxabi-cmake/libcxxabi.a contrib/libunwind-cmake/libunwind.a -Wl,--end-group -nodefaultlibs /usr/lib/llvm-13/lib/clang/13.0.1/lib/linux/libclang_rt.builtins-x86_64.a -lc -lm -lrt -lpthread -ldl && :
ld.lld-13: error: unable to find library -lsocket
clang: error: linker command failed with exit code 1 (use -v to see invocation)
[9892/10168] Linking CXX executable utils/compressor/decompress_perf
FAILED: utils/compressor/decompress_perf
: && /usr/bin/clang++-13 --target=x86_64-linux-gnu --sysroot=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64/x86_64-linux-gnu/libc --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 -std=c++20 -fdiagnostics-color=always -include /project/jd/clickhouse/base/glibc-compatibility/glibc-compat-2.32.h -fsized-deallocation -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -fasynchronous-unwind-tables -ffile-prefix-map=/project/jd/clickhouse=. -falign-functions=32 -Wall -Wno-unused-command-line-argument -fdiagnostics-absolute-paths -fstrict-vtable-pointers -fexperimental-new-pass-manager -Werror -Wextra -Wframe-larger-than=65536 -Wpedantic -Wno-vla-extension -Wno-zero-length-array -Wno-c11-extensions -Weverything -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-c99-extensions -Wno-conversion -Wno-ctad-maybe-unsupported -Wno-deprecated-dynamic-exception-spec -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-nested-anon-types -Wno-packed -Wno-padded -Wno-shift-sign-overflow -Wno-sign-conversion -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -O3 -DNDEBUG --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --ld-path=/usr/bin/ld.lld-13 -rdynamic -Wl,--no-undefined -no-pie -Wl,-no-pie src/CMakeFiles/clickhouse_malloc.dir/Common/malloc.cpp.o utils/compressor/CMakeFiles/decompress_perf.dir/decompress_perf.cpp.o -o utils/compressor/decompress_perf src/libclickhouse_new_delete.a src/libdbms.a contrib/lz4-cmake/liblz4.a contrib/llvm/llvm/lib/libLLVMExecutionEngine.a contrib/llvm/llvm/lib/libLLVMRuntimeDyld.a contrib/llvm/llvm/lib/libLLVMX86CodeGen.a contrib/llvm/llvm/lib/libLLVMCFGuard.a contrib/llvm/llvm/lib/libLLVMX86Desc.a contrib/llvm/llvm/lib/libLLVMX86Info.a contrib/llvm/llvm/lib/libLLVMAsmPrinter.a contrib/llvm/llvm/lib/libLLVMDebugInfoDWARF.a contrib/llvm/llvm/lib/libLLVMGlobalISel.a contrib/llvm/llvm/lib/libLLVMSelectionDAG.a contrib/llvm/llvm/lib/libLLVMMCDisassembler.a contrib/llvm/llvm/lib/libLLVMPasses.a contrib/llvm/llvm/lib/libLLVMCoroutines.a contrib/llvm/llvm/lib/libLLVMHelloNew.a contrib/llvm/llvm/lib/libLLVMObjCARCOpts.a contrib/llvm/llvm/lib/libLLVMCodeGen.a contrib/llvm/llvm/lib/libLLVMipo.a contrib/llvm/llvm/lib/libLLVMFrontendOpenMP.a contrib/llvm/llvm/lib/libLLVMIRReader.a contrib/llvm/llvm/lib/libLLVMAsmParser.a contrib/llvm/llvm/lib/libLLVMLinker.a contrib/llvm/llvm/lib/libLLVMBitWriter.a contrib/llvm/llvm/lib/libLLVMInstrumentation.a contrib/llvm/llvm/lib/libLLVMScalarOpts.a contrib/llvm/llvm/lib/libLLVMAggressiveInstCombine.a contrib/llvm/llvm/lib/libLLVMInstCombine.a contrib/llvm/llvm/lib/libLLVMVectorize.a contrib/llvm/llvm/lib/libLLVMTransformUtils.a contrib/llvm/llvm/lib/libLLVMTarget.a contrib/llvm/llvm/lib/libLLVMAnalysis.a contrib/llvm/llvm/lib/libLLVMProfileData.a contrib/llvm/llvm/lib/libLLVMObject.a contrib/llvm/llvm/lib/libLLVMTextAPI.a contrib/llvm/llvm/lib/libLLVMBitReader.a contrib/llvm/llvm/lib/libLLVMCore.a contrib/llvm/llvm/lib/libLLVMRemarks.a contrib/llvm/llvm/lib/libLLVMBitstreamReader.a contrib/llvm/llvm/lib/libLLVMMCParser.a contrib/llvm/llvm/lib/libLLVMMC.a 
contrib/llvm/llvm/lib/libLLVMBinaryFormat.a contrib/llvm/llvm/lib/libLLVMDebugInfoCodeView.a contrib/llvm/llvm/lib/libLLVMDebugInfoMSF.a contrib/llvm/llvm/lib/libLLVMSupport.a contrib/llvm/llvm/lib/libLLVMDemangle.a contrib/croaring-cmake/libroaring.a contrib/cppkafka-cmake/libcppkafka.a contrib/librdkafka-cmake/librdkafka.a contrib/cyrus-sasl-cmake/libsasl2.a contrib/nuraft-cmake/libnuraft.a contrib/boost-cmake/lib_boost_coroutine.a src/Common/Config/libclickhouse_common_config.a contrib/yaml-cpp-cmake/libyaml-cpp.a src/Common/ZooKeeper/libclickhouse_common_zookeeper.a src/Dictionaries/Embedded/libclickhouse_dictionaries_embedded.a src/Parsers/libclickhouse_parsers.a src/Access/Common/libclickhouse_common_access.a contrib/poco-cmake/MongoDB/lib_poco_mongodb.a src/Common/mysqlxx/libmysqlxx.a src/libclickhouse_common_io.a contrib/jemalloc-cmake/libjemalloc.a contrib/boost-cmake/lib_boost_program_options.a src/Common/StringUtils/libstring_utils.a base/widechar_width/libwidechar_width.a base/base/libcommon.a contrib/poco-cmake/Net/SSL/lib_poco_net_ssl.a contrib/poco-cmake/Net/lib_poco_net.a contrib/poco-cmake/Crypto/lib_poco_crypto.a contrib/poco-cmake/Util/lib_poco_util.a contrib/poco-cmake/JSON/lib_poco_json.a contrib/poco-cmake/JSON/lib_poco_json_pdjson.a contrib/poco-cmake/XML/lib_poco_xml.a contrib/poco-cmake/XML/lib_poco_xml_expat.a contrib/replxx-cmake/libreplxx.a contrib/cctz-cmake/libcctz.a -Wl,--whole-archive /project/jd/clickhouse/build/contrib/cctz-cmake/libtzdata.a -Wl,--no-whole-archive contrib/fmtlib-cmake/libfmt.a contrib/dragonbox-cmake/libdragonbox_to_chars.a contrib/re2-cmake/libre2_st.a contrib/libcpuid-cmake/libcpuid.a contrib/cityhash102/libcityhash.a contrib/poco-cmake/Foundation/lib_poco_foundation.a contrib/poco-cmake/Foundation/lib_poco_foundation_pcre.a contrib/xz-cmake/libliblzma.a contrib/aws-s3-cmake/libaws_s3.a contrib/aws-s3-cmake/libaws_s3_checksums.a contrib/azure-cmake/libazure_sdk.a contrib/curl-cmake/libcurl.a contrib/brotli-cmake/libbrotli.a contrib/bzip2-cmake/libbzip2.a contrib/mariadb-connector-c-cmake/libmariadbclient.a contrib/boost-cmake/lib_boost_system.a contrib/icu-cmake/libicui18n.a contrib/icu-cmake/libicuuc.a contrib/icu-cmake/libicudata.a contrib/capnproto-cmake/libcapnpc.a contrib/capnproto-cmake/libcapnp.a contrib/capnproto-cmake/libkj.a contrib/arrow-cmake/libparquet_static.a contrib/arrow-cmake/libarrow_static.a contrib/boost-cmake/lib_boost_filesystem.a contrib/double-conversion-cmake/libdouble-conversion.a contrib/flatbuffers/libflatbuffers.a contrib/arrow-cmake/libthrift_static.a contrib/avro-cmake/libavrocpp.a contrib/boost-cmake/lib_boost_iostreams.a contrib/openldap-cmake/libldap_r.a contrib/openldap-cmake/liblber.a src/Server/grpc_protos/libclickhouse_grpc_protos.a contrib/grpc/libgrpc++.a contrib/grpc/libgrpc.a contrib/re2-cmake/libre2.a contrib/grpc/third_party/cares/cares/lib/libcares.a -lresolv -lsocket contrib/abseil-cpp/absl/status/libabsl_status.a contrib/grpc/libaddress_sorting.a contrib/grpc/libupb.a contrib/grpc/libgpr.a -ldl contrib/libhdfs3-cmake/libhdfs3.a contrib/protobuf-cmake/liblibprotobuf.a contrib/libgsasl-cmake/libgsasl.a contrib/krb5-cmake/libkrb5.a contrib/libxml2-cmake/liblibxml2.a contrib/s2geometry-cmake/libs2.a contrib/abseil-cpp/absl/strings/libabsl_cord.a contrib/abseil-cpp/absl/strings/libabsl_cordz_info.a contrib/abseil-cpp/absl/strings/libabsl_cord_internal.a contrib/abseil-cpp/absl/strings/libabsl_cordz_functions.a contrib/abseil-cpp/absl/strings/libabsl_cordz_handle.a 
contrib/abseil-cpp/absl/container/libabsl_raw_hash_set.a contrib/abseil-cpp/absl/container/libabsl_hashtablez_sampler.a contrib/abseil-cpp/absl/profiling/libabsl_exponential_biased.a contrib/abseil-cpp/absl/synchronization/libabsl_synchronization.a contrib/abseil-cpp/absl/debugging/libabsl_stacktrace.a contrib/abseil-cpp/absl/debugging/libabsl_symbolize.a contrib/abseil-cpp/absl/debugging/libabsl_debugging_internal.a contrib/abseil-cpp/absl/debugging/libabsl_demangle_internal.a contrib/abseil-cpp/absl/synchronization/libabsl_graphcycles_internal.a contrib/abseil-cpp/absl/time/libabsl_time.a contrib/abseil-cpp/absl/time/libabsl_civil_time.a contrib/abseil-cpp/absl/time/libabsl_time_zone.a contrib/abseil-cpp/absl/base/libabsl_malloc_internal.a contrib/abseil-cpp/absl/hash/libabsl_hash.a contrib/abseil-cpp/absl/types/libabsl_bad_optional_access.a contrib/abseil-cpp/absl/hash/libabsl_city.a contrib/abseil-cpp/absl/types/libabsl_bad_variant_access.a contrib/abseil-cpp/absl/hash/libabsl_low_level_hash.a contrib/abseil-cpp/absl/strings/libabsl_str_format_internal.a contrib/abseil-cpp/absl/strings/libabsl_strings.a contrib/abseil-cpp/absl/numeric/libabsl_int128.a contrib/abseil-cpp/absl/base/libabsl_throw_delegate.a contrib/abseil-cpp/absl/strings/libabsl_strings_internal.a contrib/abseil-cpp/absl/base/libabsl_base.a contrib/abseil-cpp/absl/base/libabsl_spinlock_wait.a -lrt contrib/abseil-cpp/absl/base/libabsl_raw_logging_internal.a contrib/abseil-cpp/absl/base/libabsl_log_severity.a contrib/amqpcpp-cmake/libamqp-cpp.a contrib/libuv-cmake/libuv_a.a contrib/sqlite-cmake/libsqlite.a contrib/cassandra-cmake/libcassandra.a contrib/libuv-cmake/libuv.a -lrt contrib/rocksdb-cmake/librocksdb.a contrib/lz4-cmake/liblz4.a contrib/zstd-cmake/libzstd.a contrib/zlib-ng-cmake/libzlib.a contrib/snappy-cmake/libsnappy.a contrib/libpqxx-cmake/liblibpqxx.a contrib/libpq-cmake/liblibpq.a contrib/boringssl-cmake/libssl.a contrib/boringssl-cmake/libcrypto.a -lpthread contrib/boost-cmake/lib_boost_context.a contrib/libstemmer-c-cmake/libstemmer.a contrib/wordnet-blast-cmake/libwnb.a contrib/boost-cmake/lib_boost_graph.a contrib/boost-cmake/lib_boost_regex.a contrib/lemmagen-c-cmake/liblemmagen.a contrib/simdjson-cmake/libsimdjson.a contrib/consistent-hashing/libconsistent-hashing.a -Wl,--start-group base/glibc-compatibility/libglibc-compatibility.a base/glibc-compatibility/memcpy/libmemcpy.a contrib/libcxx-cmake/libcxx.a contrib/libcxxabi-cmake/libcxxabi.a contrib/libunwind-cmake/libunwind.a -Wl,--end-group -nodefaultlibs /usr/lib/llvm-13/lib/clang/13.0.1/lib/linux/libclang_rt.builtins-x86_64.a -lc -lm -lrt -lpthread -ldl && :
ld.lld-13: error: unable to find library -lsocket
clang: error: linker command failed with exit code 1 (use -v to see invocation)
[9893/10168] Linking CXX executable utils/check-marks/check-marks
FAILED: utils/check-marks/check-marks
: && /usr/bin/clang++-13 --target=x86_64-linux-gnu --sysroot=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64/x86_64-linux-gnu/libc --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 -std=c++20 -fdiagnostics-color=always -include /project/jd/clickhouse/base/glibc-compatibility/glibc-compat-2.32.h -fsized-deallocation -pipe -mssse3 -msse4.1 -msse4.2 -mpclmul -mpopcnt -fasynchronous-unwind-tables -ffile-prefix-map=/project/jd/clickhouse=. -falign-functions=32 -Wall -Wno-unused-command-line-argument -fdiagnostics-absolute-paths -fstrict-vtable-pointers -fexperimental-new-pass-manager -Werror -Wextra -Wframe-larger-than=65536 -Wpedantic -Wno-vla-extension -Wno-zero-length-array -Wno-c11-extensions -Weverything -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-c99-extensions -Wno-conversion -Wno-ctad-maybe-unsupported -Wno-deprecated-dynamic-exception-spec -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-nested-anon-types -Wno-packed -Wno-padded -Wno-shift-sign-overflow -Wno-sign-conversion -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -O3 -DNDEBUG --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --gcc-toolchain=/project/jd/clickhouse/cmake/linux/../../contrib/sysroot/linux-x86_64 --ld-path=/usr/bin/ld.lld-13 -rdynamic -Wl,--no-undefined -no-pie -Wl,-no-pie src/CMakeFiles/clickhouse_malloc.dir/Common/malloc.cpp.o utils/check-marks/CMakeFiles/check-marks.dir/main.cpp.o -o utils/check-marks/check-marks src/libclickhouse_new_delete.a src/libdbms.a contrib/boost-cmake/lib_boost_program_options.a contrib/llvm/llvm/lib/libLLVMExecutionEngine.a contrib/llvm/llvm/lib/libLLVMRuntimeDyld.a contrib/llvm/llvm/lib/libLLVMX86CodeGen.a contrib/llvm/llvm/lib/libLLVMCFGuard.a contrib/llvm/llvm/lib/libLLVMX86Desc.a contrib/llvm/llvm/lib/libLLVMX86Info.a contrib/llvm/llvm/lib/libLLVMAsmPrinter.a contrib/llvm/llvm/lib/libLLVMDebugInfoDWARF.a contrib/llvm/llvm/lib/libLLVMGlobalISel.a contrib/llvm/llvm/lib/libLLVMSelectionDAG.a contrib/llvm/llvm/lib/libLLVMMCDisassembler.a contrib/llvm/llvm/lib/libLLVMPasses.a contrib/llvm/llvm/lib/libLLVMCoroutines.a contrib/llvm/llvm/lib/libLLVMHelloNew.a contrib/llvm/llvm/lib/libLLVMObjCARCOpts.a contrib/llvm/llvm/lib/libLLVMCodeGen.a contrib/llvm/llvm/lib/libLLVMipo.a contrib/llvm/llvm/lib/libLLVMFrontendOpenMP.a contrib/llvm/llvm/lib/libLLVMIRReader.a contrib/llvm/llvm/lib/libLLVMAsmParser.a contrib/llvm/llvm/lib/libLLVMLinker.a contrib/llvm/llvm/lib/libLLVMBitWriter.a contrib/llvm/llvm/lib/libLLVMInstrumentation.a contrib/llvm/llvm/lib/libLLVMScalarOpts.a contrib/llvm/llvm/lib/libLLVMAggressiveInstCombine.a contrib/llvm/llvm/lib/libLLVMInstCombine.a contrib/llvm/llvm/lib/libLLVMVectorize.a contrib/llvm/llvm/lib/libLLVMTransformUtils.a contrib/llvm/llvm/lib/libLLVMTarget.a contrib/llvm/llvm/lib/libLLVMAnalysis.a contrib/llvm/llvm/lib/libLLVMProfileData.a contrib/llvm/llvm/lib/libLLVMObject.a contrib/llvm/llvm/lib/libLLVMTextAPI.a contrib/llvm/llvm/lib/libLLVMBitReader.a contrib/llvm/llvm/lib/libLLVMCore.a contrib/llvm/llvm/lib/libLLVMRemarks.a contrib/llvm/llvm/lib/libLLVMBitstreamReader.a contrib/llvm/llvm/lib/libLLVMMCParser.a 
contrib/llvm/llvm/lib/libLLVMMC.a contrib/llvm/llvm/lib/libLLVMBinaryFormat.a contrib/llvm/llvm/lib/libLLVMDebugInfoCodeView.a contrib/llvm/llvm/lib/libLLVMDebugInfoMSF.a contrib/llvm/llvm/lib/libLLVMSupport.a contrib/llvm/llvm/lib/libLLVMDemangle.a contrib/croaring-cmake/libroaring.a contrib/cppkafka-cmake/libcppkafka.a contrib/librdkafka-cmake/librdkafka.a contrib/cyrus-sasl-cmake/libsasl2.a contrib/nuraft-cmake/libnuraft.a contrib/boost-cmake/lib_boost_coroutine.a src/Common/Config/libclickhouse_common_config.a contrib/yaml-cpp-cmake/libyaml-cpp.a src/Common/ZooKeeper/libclickhouse_common_zookeeper.a src/Dictionaries/Embedded/libclickhouse_dictionaries_embedded.a src/Parsers/libclickhouse_parsers.a src/Access/Common/libclickhouse_common_access.a contrib/poco-cmake/MongoDB/lib_poco_mongodb.a src/Common/mysqlxx/libmysqlxx.a src/libclickhouse_common_io.a contrib/boost-cmake/lib_boost_program_options.a contrib/jemalloc-cmake/libjemalloc.a src/Common/StringUtils/libstring_utils.a base/widechar_width/libwidechar_width.a base/base/libcommon.a contrib/poco-cmake/Net/SSL/lib_poco_net_ssl.a contrib/poco-cmake/Net/lib_poco_net.a contrib/poco-cmake/Crypto/lib_poco_crypto.a contrib/poco-cmake/Util/lib_poco_util.a contrib/poco-cmake/JSON/lib_poco_json.a contrib/poco-cmake/JSON/lib_poco_json_pdjson.a contrib/poco-cmake/XML/lib_poco_xml.a contrib/poco-cmake/XML/lib_poco_xml_expat.a contrib/replxx-cmake/libreplxx.a contrib/cctz-cmake/libcctz.a -Wl,--whole-archive /project/jd/clickhouse/build/contrib/cctz-cmake/libtzdata.a -Wl,--no-whole-archive contrib/fmtlib-cmake/libfmt.a contrib/dragonbox-cmake/libdragonbox_to_chars.a contrib/re2-cmake/libre2_st.a contrib/libcpuid-cmake/libcpuid.a contrib/cityhash102/libcityhash.a contrib/poco-cmake/Foundation/lib_poco_foundation.a contrib/poco-cmake/Foundation/lib_poco_foundation_pcre.a contrib/xz-cmake/libliblzma.a contrib/aws-s3-cmake/libaws_s3.a contrib/aws-s3-cmake/libaws_s3_checksums.a contrib/azure-cmake/libazure_sdk.a contrib/curl-cmake/libcurl.a contrib/brotli-cmake/libbrotli.a contrib/bzip2-cmake/libbzip2.a contrib/mariadb-connector-c-cmake/libmariadbclient.a contrib/boost-cmake/lib_boost_system.a contrib/icu-cmake/libicui18n.a contrib/icu-cmake/libicuuc.a contrib/icu-cmake/libicudata.a contrib/capnproto-cmake/libcapnpc.a contrib/capnproto-cmake/libcapnp.a contrib/capnproto-cmake/libkj.a contrib/arrow-cmake/libparquet_static.a contrib/arrow-cmake/libarrow_static.a contrib/boost-cmake/lib_boost_filesystem.a contrib/double-conversion-cmake/libdouble-conversion.a contrib/flatbuffers/libflatbuffers.a contrib/arrow-cmake/libthrift_static.a contrib/avro-cmake/libavrocpp.a contrib/boost-cmake/lib_boost_iostreams.a contrib/openldap-cmake/libldap_r.a contrib/openldap-cmake/liblber.a src/Server/grpc_protos/libclickhouse_grpc_protos.a contrib/grpc/libgrpc++.a contrib/grpc/libgrpc.a contrib/re2-cmake/libre2.a contrib/grpc/third_party/cares/cares/lib/libcares.a -lresolv -lsocket contrib/abseil-cpp/absl/status/libabsl_status.a contrib/grpc/libaddress_sorting.a contrib/grpc/libupb.a contrib/grpc/libgpr.a -ldl contrib/libhdfs3-cmake/libhdfs3.a contrib/protobuf-cmake/liblibprotobuf.a contrib/libgsasl-cmake/libgsasl.a contrib/krb5-cmake/libkrb5.a contrib/libxml2-cmake/liblibxml2.a contrib/s2geometry-cmake/libs2.a contrib/abseil-cpp/absl/strings/libabsl_cord.a contrib/abseil-cpp/absl/strings/libabsl_cordz_info.a contrib/abseil-cpp/absl/strings/libabsl_cord_internal.a contrib/abseil-cpp/absl/strings/libabsl_cordz_functions.a 
contrib/abseil-cpp/absl/strings/libabsl_cordz_handle.a contrib/abseil-cpp/absl/container/libabsl_raw_hash_set.a contrib/abseil-cpp/absl/container/libabsl_hashtablez_sampler.a contrib/abseil-cpp/absl/profiling/libabsl_exponential_biased.a contrib/abseil-cpp/absl/synchronization/libabsl_synchronization.a contrib/abseil-cpp/absl/debugging/libabsl_stacktrace.a contrib/abseil-cpp/absl/debugging/libabsl_symbolize.a contrib/abseil-cpp/absl/debugging/libabsl_debugging_internal.a contrib/abseil-cpp/absl/debugging/libabsl_demangle_internal.a contrib/abseil-cpp/absl/synchronization/libabsl_graphcycles_internal.a contrib/abseil-cpp/absl/time/libabsl_time.a contrib/abseil-cpp/absl/time/libabsl_civil_time.a contrib/abseil-cpp/absl/time/libabsl_time_zone.a contrib/abseil-cpp/absl/base/libabsl_malloc_internal.a contrib/abseil-cpp/absl/hash/libabsl_hash.a contrib/abseil-cpp/absl/types/libabsl_bad_optional_access.a contrib/abseil-cpp/absl/hash/libabsl_city.a contrib/abseil-cpp/absl/types/libabsl_bad_variant_access.a contrib/abseil-cpp/absl/hash/libabsl_low_level_hash.a contrib/abseil-cpp/absl/strings/libabsl_str_format_internal.a contrib/abseil-cpp/absl/strings/libabsl_strings.a contrib/abseil-cpp/absl/numeric/libabsl_int128.a contrib/abseil-cpp/absl/base/libabsl_throw_delegate.a contrib/abseil-cpp/absl/strings/libabsl_strings_internal.a contrib/abseil-cpp/absl/base/libabsl_base.a contrib/abseil-cpp/absl/base/libabsl_spinlock_wait.a -lrt contrib/abseil-cpp/absl/base/libabsl_raw_logging_internal.a contrib/abseil-cpp/absl/base/libabsl_log_severity.a contrib/amqpcpp-cmake/libamqp-cpp.a contrib/libuv-cmake/libuv_a.a contrib/sqlite-cmake/libsqlite.a contrib/cassandra-cmake/libcassandra.a contrib/libuv-cmake/libuv.a -lrt contrib/rocksdb-cmake/librocksdb.a contrib/lz4-cmake/liblz4.a contrib/zstd-cmake/libzstd.a contrib/zlib-ng-cmake/libzlib.a contrib/snappy-cmake/libsnappy.a contrib/libpqxx-cmake/liblibpqxx.a contrib/libpq-cmake/liblibpq.a contrib/boringssl-cmake/libssl.a contrib/boringssl-cmake/libcrypto.a -lpthread contrib/boost-cmake/lib_boost_context.a contrib/libstemmer-c-cmake/libstemmer.a contrib/wordnet-blast-cmake/libwnb.a contrib/boost-cmake/lib_boost_graph.a contrib/boost-cmake/lib_boost_regex.a contrib/lemmagen-c-cmake/liblemmagen.a contrib/simdjson-cmake/libsimdjson.a contrib/consistent-hashing/libconsistent-hashing.a -Wl,--start-group base/glibc-compatibility/libglibc-compatibility.a base/glibc-compatibility/memcpy/libmemcpy.a contrib/libcxx-cmake/libcxx.a contrib/libcxxabi-cmake/libcxxabi.a contrib/libunwind-cmake/libunwind.a -Wl,--end-group -nodefaultlibs /usr/lib/llvm-13/lib/clang/13.0.1/lib/linux/libclang_rt.builtins-x86_64.a -lc -lm -lrt -lpthread -ldl && :
ld.lld-13: error: unable to find library -lsocket
clang: error: linker command failed with exit code 1 (use -v to see invocation)
[9941/10168] Building CXX object src/AggregateFunctions/CMakeFiles/clickhouse_aggregate_functions.dir/AggregateFunctionSumMap.cpp.o
ninja: build stopped: subcommand failed.
```
| https://github.com/ClickHouse/ClickHouse/issues/33345 | https://github.com/ClickHouse/ClickHouse/pull/33399 | 09f2cd1f95a56d71f0611432c80691b32ceda687 | 03eadbb3d82bc4ba3c93d5c7fd8735c6442acd82 | "2021-12-31T11:59:09Z" | c++ | "2022-01-04T20:16:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,344 | ["src/Interpreters/InterpreterAlterQuery.cpp", "src/Storages/AlterCommands.cpp", "src/Storages/AlterCommands.h", "src/Storages/IStorage.h", "src/Storages/MergeTree/MergeTreeData.h", "src/Storages/StorageFactory.h", "tests/queries/0_stateless/02184_storage_add_support_ttl.reference", "tests/queries/0_stateless/02184_storage_add_support_ttl.sql"] | Block TTL's for Distributed tables during alters | Right now ClickHouse allows creating TTL columns for Distributed tables during alters. It works normally until a server restart; after that the server won't start, because the Distributed engine doesn't support the TTL clause, and the metadata has to be modified manually to recover.
My cluster runs 21.3, but I also tested 21.11 and saw the same behavior.
It is blocked during CREATE, but it should also be blocked during ALTER.
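For contrast, a minimal sketch of the CREATE-time check described above (our illustration, assuming the tutorial `hits_v1` table and a cluster named `stage`; the exact error text may differ):
```sql
CREATE TABLE hits_v1_all
(
    EventDate Date,
    UTMCampaign String TTL EventDate + INTERVAL 1 MONTH
)
ENGINE = Distributed('stage', 'default', 'hits_v1');
-- rejected at CREATE time, because engine Distributed does not support the TTL clause
```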
Tested on the table from the tutorial:
ENGINE = Distributed('stage', 'default', 'hits_v1')
alter table hits_v1_all modify column UTMCampaign String TTL EventDate + INTERVAL 1 MONTH ← it passes normally | https://github.com/ClickHouse/ClickHouse/issues/33344 | https://github.com/ClickHouse/ClickHouse/pull/33391 | 53f1cff96fb0743324d06b5f663f233a68e2ddc0 | 056500f2adaa7ef2fa9228aa1f8e851678ee33a6 | "2021-12-31T09:29:46Z" | c++ | "2022-04-19T13:47:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,323 | ["src/AggregateFunctions/AggregateFunctionIntervalLengthSum.h", "tests/queries/0_stateless/02158_interval_length_sum.reference", "tests/queries/0_stateless/02158_interval_length_sum.sql"] | intervalLengthSum(10, 5) UInt64 overflow | **How to reproduce**
```
SELECT
intervalLengthSum(10, 5) AS res,
toInt64(res) AS cast,
toTypeName(res) AS type
Query id: bc8307c2-a520-4499-959e-bb00e0bdf055
┌──────────────────res─┬─cast─┬─type───┐
│ 18446744073709551611 │ -5 │ UInt64 │
└──────────────────────┴──────┴────────┘
```
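A possible interim workaround on affected versions (our suggestion, not from the original report; it is only meaningful while the true interval sum fits into Int64):
```sql
SELECT greatest(toInt64(intervalLengthSum(10, 5)), 0) AS res;  -- 0 instead of 18446744073709551611
```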
**Expected behavior**
Intervals with a negative sum should be converted (clamped) to zero instead of wrapping around as UInt64. | https://github.com/ClickHouse/ClickHouse/issues/33323 | https://github.com/ClickHouse/ClickHouse/pull/33335 | 5616000568a75f28c2a4df7c2d10bbffa1620507 | f80b83022f87d4bdefa2e51d2595a562f4279c8b | "2021-12-30T12:27:49Z" | c++ | "2022-01-04T16:36:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,297 | ["src/Interpreters/ActionsVisitor.cpp", "tests/queries/0_stateless/02160_untuple_exponential_growth.reference", "tests/queries/0_stateless/02160_untuple_exponential_growth.sh"] | This weird query is slow | **How to reproduce**
```
SELECT untuple(tuple(untuple((1, untuple((untuple(tuple(untuple(tuple(untuple((untuple((1, 1, 1, 1)), 1, 1, 1)))))), 1, 1))))))
```
Most likely due to very long column name. | https://github.com/ClickHouse/ClickHouse/issues/33297 | https://github.com/ClickHouse/ClickHouse/pull/33445 | b1d2bdf5699c3726dc7f6fd3bca744eb267c187a | d274c00ec7792068a7197d938887d193925a9ce5 | "2021-12-29T17:20:33Z" | c++ | "2022-01-07T11:23:42Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,289 | ["src/Interpreters/InterpreterSelectQuery.cpp", "src/Interpreters/InterpreterSelectWithUnionQuery.cpp", "src/Interpreters/SelectQueryOptions.h", "src/Storages/SelectQueryInfo.h", "src/Storages/StorageView.cpp", "tests/queries/0_stateless/02169_fix_view_offset_limit_setting.reference", "tests/queries/0_stateless/02169_fix_view_offset_limit_setting.sql"] | Settings limit/offset does not work as expected on View | Given that vcounter is a View, the settings limit/offset does not work as expected.
```sql
CREATE TABLE counter (id UInt64, createdAt DateTime) ENGINE = MergeTree() ORDER BY id;
INSERT INTO counter SELECT number, now() FROM numbers(500);
CREATE VIEW vcounter AS
SELECT
intDiv(id, 10) AS tens,
max(createdAt) AS maxid
FROM counter
GROUP BY tens;
SELECT *
FROM vcounter
ORDER BY tens ASC
LIMIT 100
SETTINGS limit = 6, offset = 5
┌─tens─┬───────────────maxid─┐
│ 47 │ 2021-12-29 13:05:23 │
└──────┴─────────────────────┘
1 rows in set. Elapsed: 0.006 sec.
```
The query should return 6 rows, instead of one.
**Does it reproduce on recent release?**
Yes, we can reproduce it on `21.10`.
**Expected behavior**
The query should return 6 rows, instead of one.
**Additional context**
Replacing the view with a subquery works, though:
```sql
SELECT *
FROM
(
SELECT
intDiv(id, 10) AS tens,
max(createdAt) AS maxid
FROM test.counter
GROUP BY tens
)
LIMIT 100
SETTINGS limit = 6, offset = 5;
┌─tens─┬───────────────maxid─┐
│ 3 │ 2021-12-29 13:05:23 │
│ 31 │ 2021-12-29 13:05:23 │
│ 47 │ 2021-12-29 13:05:23 │
│ 40 │ 2021-12-29 13:05:23 │
│ 30 │ 2021-12-29 13:05:23 │
│ 2 │ 2021-12-29 13:05:23 │
└──────┴─────────────────────┘
6 rows in set. Elapsed: 0.011 sec.
```
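As an additional note (ours, not part of the original report): plain LIMIT/OFFSET clauses, unlike the `limit`/`offset` settings, are applied after the view is expanded and can serve as a workaround here:
```sql
SELECT *
FROM vcounter
ORDER BY tens ASC
LIMIT 6 OFFSET 5;  -- returns 6 rows as expected
```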
| https://github.com/ClickHouse/ClickHouse/issues/33289 | https://github.com/ClickHouse/ClickHouse/pull/33518 | 2088570a01cf22c8179b1efc754b2c430596960b | 6c71a7c40f1ea6816ab2b9ce42ba4290ff47cb83 | "2021-12-29T13:46:36Z" | c++ | "2022-01-12T20:39:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,270 | ["src/CMakeLists.txt", "src/Common/Config/CMakeLists.txt", "src/Common/Config/ConfigHelper.cpp", "src/Common/Config/ConfigHelper.h", "src/Common/tests/gtest_config_helper.cpp", "src/Interpreters/Cluster.cpp", "tests/integration/test_config_xml_full/configs/config.xml", "tests/integration/test_config_yaml_full/configs/config.yaml"] | Cannot write `<secure/>` instead of `<secure>1</secure>` in cluster configuration. | null | https://github.com/ClickHouse/ClickHouse/issues/33270 | https://github.com/ClickHouse/ClickHouse/pull/33330 | a3b30dce3239b631fa670a7373cbed5a0e02a4cb | 6941b072f4e559392cba142677423e2f82dd9b4b | "2021-12-28T21:58:37Z" | c++ | "2021-12-31T11:36:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,257 | ["src/Interpreters/InterpreterSelectQuery.cpp", "src/Storages/IStorage.h", "src/Storages/StorageDistributed.h", "src/Storages/StorageMerge.cpp", "src/Storages/StorageMerge.h", "tests/queries/0_stateless/02156_storage_merge_prewhere.reference", "tests/queries/0_stateless/02156_storage_merge_prewhere.sql"] | Move to prewhere optimization should also work for Merge tables | **Describe the situation**
Currently it does not:
```
milovidov-desktop :) EXPLAIN SYNTAX SELECT sum(cityHash64(*)) AS x FROM test.hits WHERE URLDomain LIKE '%metrika%'
EXPLAIN SYNTAX
SELECT sum(cityHash64(*)) AS x
FROM test.hits
WHERE URLDomain LIKE '%metrika%'
Query id: a752ed76-a71c-4e26-96c1-1b081abfc539
┌─explain────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ SELECT sum(cityHash64(WatchID, JavaEnable, Title, GoodEvent, EventTime, EventDate, CounterID, ClientIP, ClientIP6, RegionID, UserID, CounterClass, OS, UserAgent, URL, Referer, URLDomain, RefererDomain, Refresh, IsRobot, RefererCategories, URLCategories, URLRegions, RefererRegions, ResolutionWidth, ResolutionHeight, ResolutionDepth, FlashMajor, FlashMinor, FlashMinor2, NetMajor, NetMinor, UserAgentMajor, UserAgentMinor, CookieEnable, JavascriptEnable, IsMobile, MobilePhone, MobilePhoneModel, Params, IPNetworkID, TraficSourceID, SearchEngineID, SearchPhrase, AdvEngineID, IsArtifical, WindowClientWidth, WindowClientHeight, ClientTimeZone, ClientEventTime, SilverlightVersion1, SilverlightVersion2, SilverlightVersion3, SilverlightVersion4, PageCharset, CodeVersion, IsLink, IsDownload, IsNotBounce, FUniqID, HID, IsOldCounter, IsEvent, IsParameter, DontCountHits, WithHash, HitColor, UTCEventTime, Age, Sex, Income, Interests, Robotness, GeneralInterests, RemoteIP, RemoteIP6, WindowName, OpenerName, HistoryLength, BrowserLanguage, BrowserCountry, SocialNetwork, SocialAction, HTTPError, SendTiming, DNSTiming, ConnectTiming, ResponseStartTiming, ResponseEndTiming, FetchTiming, RedirectTiming, DOMInteractiveTiming, DOMContentLoadedTiming, DOMCompleteTiming, LoadEventStartTiming, LoadEventEndTiming, NSToDOMContentLoadedTiming, FirstPaintTiming, RedirectCount, SocialSourceNetworkID, SocialSourcePage, ParamPrice, ParamOrderID, ParamCurrency, ParamCurrencyID, GoalsReached, OpenstatServiceName, OpenstatCampaignID, OpenstatAdID, OpenstatSourceID, UTMSource, UTMMedium, UTMCampaign, UTMContent, UTMTerm, FromTag, HasGCLID, RefererHash, URLHash, CLID, YCLID, ShareService, ShareURL, ShareTitle, `ParsedParams.Key1`, `ParsedParams.Key2`, `ParsedParams.Key3`, `ParsedParams.Key4`, `ParsedParams.Key5`, `ParsedParams.ValueDouble`, IslandID, RequestNum, RequestTry)) AS x │
│ FROM test.hits │
│ PREWHERE URLDomain LIKE '%metrika%' │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
3 rows in set. Elapsed: 0.004 sec.
milovidov-desktop :) EXPLAIN SYNTAX SELECT sum(cityHash64(*)) AS x FROM merge(test, '^hits$') WHERE URLDomain LIKE '%metrika%'
EXPLAIN SYNTAX
SELECT sum(cityHash64(*)) AS x
FROM merge(test, '^hits$')
WHERE URLDomain LIKE '%metrika%'
Query id: fb059673-7da1-4cf4-9713-e5c7b9feb66d
┌─explain────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ SELECT sum(cityHash64(WatchID, JavaEnable, Title, GoodEvent, EventTime, EventDate, CounterID, ClientIP, ClientIP6, RegionID, UserID, CounterClass, OS, UserAgent, URL, Referer, URLDomain, RefererDomain, Refresh, IsRobot, RefererCategories, URLCategories, URLRegions, RefererRegions, ResolutionWidth, ResolutionHeight, ResolutionDepth, FlashMajor, FlashMinor, FlashMinor2, NetMajor, NetMinor, UserAgentMajor, UserAgentMinor, CookieEnable, JavascriptEnable, IsMobile, MobilePhone, MobilePhoneModel, Params, IPNetworkID, TraficSourceID, SearchEngineID, SearchPhrase, AdvEngineID, IsArtifical, WindowClientWidth, WindowClientHeight, ClientTimeZone, ClientEventTime, SilverlightVersion1, SilverlightVersion2, SilverlightVersion3, SilverlightVersion4, PageCharset, CodeVersion, IsLink, IsDownload, IsNotBounce, FUniqID, HID, IsOldCounter, IsEvent, IsParameter, DontCountHits, WithHash, HitColor, UTCEventTime, Age, Sex, Income, Interests, Robotness, GeneralInterests, RemoteIP, RemoteIP6, WindowName, OpenerName, HistoryLength, BrowserLanguage, BrowserCountry, SocialNetwork, SocialAction, HTTPError, SendTiming, DNSTiming, ConnectTiming, ResponseStartTiming, ResponseEndTiming, FetchTiming, RedirectTiming, DOMInteractiveTiming, DOMContentLoadedTiming, DOMCompleteTiming, LoadEventStartTiming, LoadEventEndTiming, NSToDOMContentLoadedTiming, FirstPaintTiming, RedirectCount, SocialSourceNetworkID, SocialSourcePage, ParamPrice, ParamOrderID, ParamCurrency, ParamCurrencyID, GoalsReached, OpenstatServiceName, OpenstatCampaignID, OpenstatAdID, OpenstatSourceID, UTMSource, UTMMedium, UTMCampaign, UTMContent, UTMTerm, FromTag, HasGCLID, RefererHash, URLHash, CLID, YCLID, ShareService, ShareURL, ShareTitle, `ParsedParams.Key1`, `ParsedParams.Key2`, `ParsedParams.Key3`, `ParsedParams.Key4`, `ParsedParams.Key5`, `ParsedParams.ValueDouble`, IslandID, RequestNum, RequestTry)) AS x │
│ FROM merge('test', '^hits$') │
│ WHERE URLDomain LIKE '%metrika%' │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
Most likely because Merge table does not provide column sizes info.
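For reference (our illustration, not part of the original note), the column sizes in question are roughly the per-column on-disk sizes that each underlying table can report, e.g. what `system.parts_columns` exposes:
```sql
SELECT column, sum(column_data_compressed_bytes) AS compressed_bytes
FROM system.parts_columns
WHERE database = 'test' AND table = 'hits' AND active
GROUP BY column
ORDER BY compressed_bytes DESC
LIMIT 5;
```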
We can provide it either:
- by iterating and summing across all tables;
- or only for the first N = 10 tables, in case the number of tables is very large. | https://github.com/ClickHouse/ClickHouse/issues/33257 | https://github.com/ClickHouse/ClickHouse/pull/33300 | 376709b249891b88f88d22b0a4da86379d018fbf | 46b9279d81cfc6cb160d42db14208f8d57e057dc | "2021-12-28T13:28:39Z" | c++ | "2022-01-10T12:15:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,251 | ["src/Databases/DatabaseDictionary.cpp", "src/Databases/DatabasesCommon.cpp", "src/Interpreters/InterpreterAlterQuery.cpp", "src/Storages/IStorage.cpp", "src/Storages/StorageDictionary.cpp", "src/Storages/StorageDictionary.h", "src/Storages/System/StorageSystemDictionaries.cpp", "tests/queries/0_stateless/02155_dictionary_comment.reference", "tests/queries/0_stateless/02155_dictionary_comment.sql"] | Received signal Segmentation fault (11) when ALTER TABLE for dictionary | Clickhouse server drops when I send ALTER TABLE request with COMMENT COLUMN for DICTIONARY.
I know the request is incorrect, but instead of returning an error, the server simply stops working.
Server version: (version 21.12.2.17 (official build))
Reproduce:
1. Create any dictionary. Example:
```sql
CREATE DICTIONARY stores (
storeId UInt64,
storeName String,
shopId UInt32,
status UInt8,
lat Float32,
lon Float32,
address String,
storeCountry String,
storeRegion String,
storeCity String,
city_id UInt32,
email String,
storeParams String
)
PRIMARY KEY storeId
SOURCE(MYSQL(
port 3306
user 'user'
password 'pass'
replica(host '10.20.20.12' priority 1)
db 'db'
table 'stores'
))
LAYOUT(FLAT())
LIFETIME(300);
```
2. Send ALTER TABLE for this dictionary. Example:
```sql
ALTER TABLE stores COMMENT COLUMN storeId 'any comment';
```
```
2021.12.28 09:17:05.578111 [ 251 ] {} <Fatal> BaseDaemon: ########################################
2021.12.28 09:17:05.582661 [ 251 ] {} <Fatal> BaseDaemon: (version 21.12.2.17 (official build), build id: 29EDA0CC01FD10E7) (from thread 96) (query_id: 780d68be-8e65-43c5-b454-0af77e7264a8) Received signal Segmentation fault (11)
2021.12.28 09:17:05.591963 [ 251 ] {} <Fatal> BaseDaemon: Address: 0x18 Access: read. Address not mapped to object.
2021.12.28 09:17:05.599694 [ 251 ] {} <Fatal> BaseDaemon: Stack trace: 0x12b47423 0x12b46d9b 0x12b251ca 0x1362c344 0x12f480b7 0x12f45d52 0x134cff90 0x134d341c 0x13d3426e 0x13d389c7 0x13f9b38a 0x16ee23af 0x16ee4801 0x16ff3589 0x16ff0c80 0x7f14c5f93609 0x7f14c5eba293
2021.12.28 09:17:05.607719 [ 251 ] {} <Fatal> BaseDaemon: 2. void DB::IAST::replace<DB::ASTExpressionList>(DB::ASTExpressionList*&, std::__1::shared_ptr<DB::IAST> const&) @ 0x12b47423 in /usr/bin/clickhouse
2021.12.28 09:17:05.614280 [ 251 ] {} <Fatal> BaseDaemon: 3. DB::applyMetadataChangesToCreateQuery(std::__1::shared_ptr<DB::IAST> const&, DB::StorageInMemoryMetadata const&) @ 0x12b46d9b in /usr/bin/clickhouse
2021.12.28 09:17:05.620176 [ 251 ] {} <Fatal> BaseDaemon: 4. DB::DatabaseOrdinary::alterTable(std::__1::shared_ptr<DB::Context const>, DB::StorageID const&, DB::StorageInMemoryMetadata const&) @ 0x12b251ca in /usr/bin/clickhouse
2021.12.28 09:17:05.629057 [ 251 ] {} <Fatal> BaseDaemon: 5. DB::IStorage::alter(DB::AlterCommands const&, std::__1::shared_ptr<DB::Context const>, std::__1::unique_lock<std::__1::timed_mutex>&) @ 0x1362c344 in /usr/bin/clickhouse
2021.12.28 09:17:05.634461 [ 251 ] {} <Fatal> BaseDaemon: 6. DB::InterpreterAlterQuery::executeToTable(DB::ASTAlterQuery const&) @ 0x12f480b7 in /usr/bin/clickhouse
2021.12.28 09:17:05.644173 [ 251 ] {} <Fatal> BaseDaemon: 7. DB::InterpreterAlterQuery::execute() @ 0x12f45d52 in /usr/bin/clickhouse
2021.12.28 09:17:05.651961 [ 251 ] {} <Fatal> BaseDaemon: 8. ? @ 0x134cff90 in /usr/bin/clickhouse
2021.12.28 09:17:05.661756 [ 251 ] {} <Fatal> BaseDaemon: 9. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>, std::__1::optional<DB::FormatSettings> const&) @ 0x134d341c in /usr/bin/clickhouse
2021.12.28 09:17:05.673192 [ 251 ] {} <Fatal> BaseDaemon: 10. DB::HTTPHandler::processQuery(DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x13d3426e in /usr/bin/clickhouse
2021.12.28 09:17:05.682933 [ 251 ] {} <Fatal> BaseDaemon: 11. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x13d389c7 in /usr/bin/clickhouse
```
Full logs:
[clickhouse-server.log](https://github.com/ClickHouse/ClickHouse/files/7784075/clickhouse-server.log)
[clickhouse-server.err.log](https://github.com/ClickHouse/ClickHouse/files/7784077/clickhouse-server.err.log)
| https://github.com/ClickHouse/ClickHouse/issues/33251 | https://github.com/ClickHouse/ClickHouse/pull/33261 | 062f14cb578e294cf9e010e402181ea27ddc3ba4 | c3c8af747d38db3455134e33205eed233fa9ff54 | "2021-12-28T10:03:06Z" | c++ | "2021-12-29T09:43:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,028 | ["src/Functions/replicate.h", "tests/queries/0_stateless/02155_nested_lc_defalut_bug.reference", "tests/queries/0_stateless/02155_nested_lc_defalut_bug.sql"] | Type mismatch for nested Column that has type Array(LowCardinality(String)) | When I add a new column of type Array(LowCardinality(String)) to the nested structure and then insert data without this new column, it throws an exception. The steps are as follows:
- 1. Initial table
```
CREATE TABLE nested_test
(
`c1` UInt32,
`c2` DateTime,
`nest.col1` Array(String),
`nest.col2` Array(Int8)
)
ENGINE = MergeTree
ORDER BY (c1, c2);
```
- 2. Populate some data before adding the new element to the nested column structure
```
INSERT INTO nested_test (c1, c2, `nest.col1`, `nest.col2`)
SELECT * FROM generateRandom('c1 UInt32, c2 DateTime, `nest.col1` Array(String), `nest.col2` Array(Int8)') LIMIT 1000000;
```
- 3. Add the new element
```
ALTER TABLE nested_test
ADD COLUMN `nest.col3` Array(LowCardinality(String));
```
- 4. Try to insert the same data (without the new column):
```
INSERT INTO nested_test (c1, c2, `nest.col1`, `nest.col2`)
SELECT * FROM generateRandom('c1 UInt32, c2 DateTime, `nest.col1` Array(String), `nest.col2` Array(Int8)') LIMIT 1000000;
```
**DB::Exception:**
- CH version 21.8.12 (the same exception in version 21.12.2.17):
Received exception from server (version 21.8.12):
Code: 53. DB::Exception: Received from localhost:9000. DB::Exception: Type mismatch for column nest.col3. Column has type Array(LowCardinality(String)), got type Array(String).
- CH version 21.11.6:
Received exception from server (version 21.11.6):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Block structure mismatch in function connect between ConvertingTransform and MergeTreeSink stream: different types:
nest.col3 Array(String) Array(size = 0, UInt64(size = 0), String(size = 0))
nest.col3 Array(LowCardinality(String)) Array(size = 0, UInt64(size = 0), ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, String(size = 1)))). (LOGICAL_ERROR)
> When I use Array(String) instead of Array(LowCardinality(String)), the insert succeeds; it also works if I add the new column with an explicit default value:
```
ALTER TABLE nested_test ADD COLUMN `nest.col3` Array(LowCardinality(String)) default arrayResize(CAST([], 'Array(LowCardinality(String))'), length(`nest.col1`));
``` | https://github.com/ClickHouse/ClickHouse/issues/33028 | https://github.com/ClickHouse/ClickHouse/pull/33504 | 8aa930b52f0209f7456ef0763e33805f20d6c932 | eb65175b6bc24079e77ea04578545d4216a2e71e | "2021-12-21T23:56:12Z" | c++ | "2022-01-11T09:15:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 33,009 | ["src/IO/S3/PocoHTTPClient.cpp"] | gcs (s3 from google): `The request signature we calculated does not match the signature you provided.` in case of `:` in object path | > You have to provide the following information whenever possible.
If an S3 object has `:` in its path, ClickHouse fails to request it from GCS because of a mismatched signature.
**Describe what's wrong**
Clickhouse 21.8
```
executeQuery: Code: 499, e.displayText() = DB::Exception: The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.: While executing S3 (version 21.8.11.4) (from 127.0.0.1:47668) (in query: SELECT * FROM s3('https://storage.googleapis.com/BUCKET/PATH/*/*', 'XXXXXXXXX', 'XXXXXXXX' ,TSV, 'raw String');), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8fd379a in /usr/bin/clickhouse
1. DB::ReadBufferFromS3::initialize() @ 0xff8190b in /usr/bin/clickhouse
2. DB::ReadBufferFromS3::nextImpl() @ 0xff80b1b in /usr/bin/clickhouse
3. DB::TabSeparatedRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0x111bcfb9 in /usr/bin/clickhouse
4. DB::IRowInputFormat::generate() @ 0x110e51c8 in /usr/bin/clickhouse
5. DB::ISource::tryGenerate() @ 0x1106f235 in /usr/bin/clickhouse
6. DB::ISource::work() @ 0x1106ee1a in /usr/bin/clickhouse
7. DB::InputStreamFromInputFormat::readImpl() @ 0xe45e8ff in /usr/bin/clickhouse
8. DB::IBlockInputStream::read() @ 0xfdd05e7 in /usr/bin/clickhouse
9. DB::StorageS3Source::generate() @ 0x10b367b6 in /usr/bin/clickhouse
10. DB::ISource::tryGenerate() @ 0x1106f235 in /usr/bin/clickhouse
11. DB::ISource::work() @ 0x1106ee1a in /usr/bin/clickhouse
12. DB::SourceWithProgress::work() @ 0x112402db in /usr/bin/clickhouse
13. ? @ 0x110a98dd in /usr/bin/clickhouse
14. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x110a6471 in /usr/bin/clickhouse
15. ? @ 0x110aaf16 in /usr/bin/clickhouse
16. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x901441f in /usr/bin/clickhouse
17. ? @ 0x9017d03 in /usr/bin/clickhouse
18. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
19. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
0 rows in set. Elapsed: 0.223 sec.
Received exception from server (version 21.8.11):
Code: 499. DB::Exception: Received from localhost:9000. DB::Exception: The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.: While executing S3.
[chi--0-0-0] 2021.11.23 18:45:48.195319 [ 163 ] {26e03d2b-2a70-4562-a252-8d739d3b3b59} <Debug> AWSClient: Make request to: https://storage.googleapis.com/BUCKET/PATH/2021-11-23/2021-11-23_16:52:00_000000000000.csv
[chi--0-0-0] 2021.11.23 18:45:48.195336 [ 181 ] {26e03d2b-2a70-4562-a252-8d739d3b3b59} <Trace> AWSClient: Request Successfully signed
[chi--0-0-0] 2021.11.23 18:45:48.195365 [ 181 ] {26e03d2b-2a70-4562-a252-8d739d3b3b59} <Debug> AWSClient: Make request to: https://storage.googleapis.com/BUCKET/PATH/2021-11-23/2021-11-23_14:23:51_000000000000.csv
[chi--0-0-0] 2021.11.23 18:45:48.203754 [ 163 ] {26e03d2b-2a70-4562-a252-8d739d3b3b59} <Trace> AWSClient: Receiving response...
[chi--0-0-0] 2021.11.23 18:45:48.203752 [ 181 ] {26e03d2b-2a70-4562-a252-8d739d3b3b59} <Trace> AWSClient: Receiving response...
[chi--0-0-0] 2021.11.23 18:45:48.274493 [ 163 ] {26e03d2b-2a70-4562-a252-8d739d3b3b59} <Debug> AWSClient: Response status: 403, Forbidden
[chi--0-0-0] 2021.11.23 18:45:48.274591 [ 163 ] {26e03d2b-2a70-4562-a252-8d739d3b3b59} <Debug> AWSClient: Received headers: X-GUploader-UploadID: XXXXXXXXXXXXXXXXXXXXX-XXXXXXXXXXXXXXXXXXXXX-2rA; Content-Type: application/xml; charset=UTF-8; Content-Length: 901; Date: Tue, 23 Nov 2021 18:45:48 GMT; Expires: Tue, 23 Nov 2021 18:45:48 GMT; Cache-Control: private, max-age=0; Server: UploadServer; Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"; Connection: close;
[chi--0-0-0] 2021.11.23 18:45:48.274655 [ 163 ] {26e03d2b-2a70-4562-a252-8d739d3b3b59} <Trace> AWSClient: Request returned error. Attempting to generate appropriate error codes from response
[chi--0-0-0] 2021.11.23 18:45:48.274738 [ 163 ] {26e03d2b-2a70-4562-a252-8d739d3b3b59} <Trace> AWSClient: AWSErrorMarshaller: Error response is <?xml version="1.0"?>
```
It is clearly visible that the provided credentials are correct (`s3(...,'XXXXXXXXX', 'XXXXXXXX')`), because ClickHouse was able to execute the listObjects API call and then tried to get a specific object.
```
[chi--0-0-0] 2021.11.23 18:45:48.195319 [ 163 ] {26e03d2b-2a70-4562-a252-8d739d3b3b59} <Debug> AWSClient: Make request to: https://storage.googleapis.com/BUCKET/PATH/2021-11-23/2021-11-23_16:52:00_000000000000.csv
``` | https://github.com/ClickHouse/ClickHouse/issues/33009 | https://github.com/ClickHouse/ClickHouse/pull/37344 | 86afa3a24511e4f202e1134d04c73341c5782003 | 8ba865bb608fcade2a6d663e0179c3c89b85f3f0 | "2021-12-21T14:27:00Z" | c++ | "2022-05-26T21:58:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,969 | ["src/Columns/ColumnFunction.h", "src/Columns/MaskOperations.cpp", "src/Columns/MaskOperations.h", "src/Functions/FunctionsLogical.cpp", "src/Functions/if.cpp", "src/Functions/multiIf.cpp", "src/Functions/throwIf.cpp", "tests/queries/0_stateless/02152_short_circuit_throw_if.reference", "tests/queries/0_stateless/02152_short_circuit_throw_if.sql"] | Conditional functions with constant expression as condition should evaluate only one branch | For example, if I use UInt8 constant for if condition then only one branch is evaluated:
```sql
SELECT if(1, 'success', throwIf(1, 'Executing FALSE branch'))
┌─'success'─┐
│ success │
└───────────┘
1 rows in set. Elapsed: 0.003 sec.
```
But if I use some constant expression, then both branches are evaluated:
```sql
SELECT if(empty(''), 'success', throwIf(1, 'Executing FALSE branch'))
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 21.12.2):
Code: 395. DB::Exception: Received from localhost:9000. DB::Exception: Executing FALSE branch: While processing if(empty(''), 'success', throwIf(1, 'Executing FALSE branch')). (FUNCTION_THROW_IF_VALUE_IS_NON_ZERO)
``` | https://github.com/ClickHouse/ClickHouse/issues/32969 | https://github.com/ClickHouse/ClickHouse/pull/32973 | e61d3eef0ce8f6efbbc4f95154e706bd45750fd2 | daa23a2827dc415e97e1903f32354205cf68df08 | "2021-12-20T09:41:58Z" | c++ | "2021-12-20T16:50:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,964 | ["src/Storages/MergeTree/MergeTreePartsMover.cpp"] | modification_time not correct after move part operation | **Describe what's wrong**
`system.parts` -> `modification_time` was not correct after move part operation.
**Does it reproduce on recent release?**
The latest release: v21.9.4.35-stable
**How to reproduce**
* Clickhouse version: v21.9.4.35-stable
* Create a MergeTree family table.
```sql
CREATE TABLE t
(
a UInt32,
b Date
)
ENGINE = MergeTree()
PARTITION BY b
ORDER BY b
```
* Write some data to table `t`
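The report does not include the exact insert; any rows matching the schema will do, for example:
```sql
INSERT INTO t VALUES (1, '2021-12-20'), (2, '2021-12-21');
```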
* Move a data part of `t` to another disk.
```sql
ALTER TABLE t MOVE PART 'part name' TO DISK 'some disk'
```
* Select part info from `system.parts`.
```sql
SELECT modification_time FROM system.parts WHERE table = 't' AND name = 'part name'
```
* We got this
|modification_time|
|:-:|
|1970-01-01 00:00:00|
**Expected behavior**
We should get the correct `modification_time`, but we got '1970-01-01 00:00:00'; it seems `modification_time` is not initialized after a move part operation. | https://github.com/ClickHouse/ClickHouse/issues/32964 | https://github.com/ClickHouse/ClickHouse/pull/32965 | 2ac789747079350cd5130d7e51e90ee17ecefb76 | 26ea47982eb371ad337f1dd6dd5672083ced3f3d | "2021-12-20T08:18:14Z" | c++ | "2021-12-20T21:05:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,937 | ["src/Functions/IFunction.cpp", "tests/queries/0_stateless/01780_column_sparse_pk.reference", "tests/queries/0_stateless/01780_column_sparse_pk.sql"] | ./src/Common/assert_cast.h:50:12: runtime error: downcast of address 0x7f8af4055270 which does not point to an object of type 'const DB::ColumnNullable' | https://s3.amazonaws.com/clickhouse-test-reports/0/6bd7e425c641417b9235ebf6d3ac876118f06fbe/fuzzer_astfuzzerubsan,actions//report.html
It's related to `ColumnSparse`. | https://github.com/ClickHouse/ClickHouse/issues/32937 | https://github.com/ClickHouse/ClickHouse/pull/33146 | 47d50c3bd130c21ec604eaee2c8157493a239bb3 | 4d99f1016daa0de6af8ef85192d0b6f2a1552853 | "2021-12-18T20:00:24Z" | c++ | "2021-12-25T03:28:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,804 | ["src/Interpreters/InterpreterInsertQuery.cpp", "src/Interpreters/InterpreterInsertQuery.h", "src/Interpreters/executeQuery.cpp", "tests/queries/0_stateless/02156_async_insert_query_log.reference", "tests/queries/0_stateless/02156_async_insert_query_log.sh"] | No query_log for inserts if async_insert used | If we enable async_insert, the query log is missing all insert queries.
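For context, an illustration of the kind of query_log check that comes back empty when async_insert is on (our example using standard `system.query_log` columns, not from the original report):
```sql
SELECT count()
FROM system.query_log
WHERE query_kind = 'Insert' AND type = 'QueryFinish' AND event_date = today();
-- returns 0 for asynchronous inserts on affected versions
```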
We also use the query log for monitoring (insert times, insert frequency, etc.), and with async_insert the query_log is empty. Maybe the log can be written anyway? | https://github.com/ClickHouse/ClickHouse/issues/32804 | https://github.com/ClickHouse/ClickHouse/pull/33239 | f18223f51ef4e1f5ae1a0f1259c28b924d626b6d | 9b63fa69496c434fde76779441e6f706e1ffe367 | "2021-12-15T12:29:16Z" | c++ | "2021-12-29T06:34:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,777 | ["src/Functions/ReplaceRegexpImpl.h", "tests/queries/0_stateless/02150_replace_regexp_all_empty_match.reference", "tests/queries/0_stateless/02150_replace_regexp_all_empty_match.sql", "tests/queries/0_stateless/02151_replace_regexp_all_empty_match_alternative.reference", "tests/queries/0_stateless/02151_replace_regexp_all_empty_match_alternative.sql"] | replaceRegexpAll has different behavior in 21 vs 20, and doesn't match the documentation | **Describe the unexpected behaviour**
I expect
```sql
select replaceRegexpAll('Hello, World!', '^', 'here: ')
```
to produce
```
here: Hello, World!
```
in v20, this is the case, and in v21 the result is:
```
here: ello, World!
```
**How to reproduce**
Run the above query in clickhouse v20, then in v21 | https://github.com/ClickHouse/ClickHouse/issues/32777 | https://github.com/ClickHouse/ClickHouse/pull/32945 | 1141ae91d895866ccaefb4e855a9032397b4b944 | 2e5a14a8de802019fdc7cf5844d65c2f331883d3 | "2021-12-14T22:17:38Z" | c++ | "2021-12-19T22:42:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,773 | ["base/base/defines.h", "src/Daemon/BaseDaemon.cpp"] | TSan fails to unmap memory and gets SIGILL, server crashes with SIGSEGV | ERROR: type should be string, got "https://s3.amazonaws.com/clickhouse-test-reports/0/2cf545642350f4395c42699d45db5700c3e28743/stress_test__thread__actions_.html\r\n\r\nserver log:\r\n```\r\n/var/log/clickhouse-server/clickhouse-server.log.2:2021.12.14 22:12:47.259360 [ 15452 ] {} <Fatal> BaseDaemon: ########################################\r\n/var/log/clickhouse-server/clickhouse-server.log.2:2021.12.14 22:12:47.266199 [ 15452 ] {} <Fatal> BaseDaemon: (version 21.13.1.433, build id: 452363989B152D94) (from thread 5352) (query_id: 3030a300-7f92-4701-adf0-d9fe3d85c8c6) Received signal Unknown signal (-3)\r\n/var/log/clickhouse-server/clickhouse-server.log.2:2021.12.14 22:12:47.270865 [ 15452 ] {} <Fatal> BaseDaemon: Sanitizer trap.\r\n/var/log/clickhouse-server/clickhouse-server.log.2:2021.12.14 22:12:47.287707 [ 15452 ] {} <Fatal> BaseDaemon: Stack trace: 0x9e55793 0x15811586 0x9d7c2f6 0x9df4189 0x9deb126 0x9deba21 0x9d93859 0x9e73bd7 0x9e738f8 0x15b793ed 0x15de559d 0x15d9dd26 0x173de074 0x173dcdbb 0x17a2b04b 0x17a30eda 0x17a2f400 0x17a21ac0 0x17a22bd7 0x17a211c9 0x176c8a35 0x176c84f3 0x17967dca 0x176ef6cf 0x176e16c1 0x176e28b5 0x9e7a5ce 0x9e7dd91 0x9d94e3d 0x7f8d20bb3609 0x7f8d20ada293\r\n/var/log/clickhouse-server/clickhouse-server.log.2:2021.12.14 22:12:57.025441 [ 513 ] {} <Fatal> Application: Child process was terminated by signal 11.\r\n```\r\n\r\ngdb log:\r\n```\r\n2021-12-14 22:12:47 Thread 23 \"MergeMutate\" received signal SIGILL, Illegal instruction.\r\n2021-12-14 22:12:47 [Switching to Thread 0x7f8c8d8a5700 (LWP 547)]\r\n2021-12-14 22:12:47 0x0000000009d7c3f8 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) ()\r\n2021-12-14 22:12:47 #0 0x0000000009d7c3f8 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) ()\r\n2021-12-14 22:12:47 No symbol table info available.\r\n2021-12-14 22:12:47 #1 0x0000000009d76dcf in __sanitizer::UnmapOrDie(void*, unsigned long) ()\r\n2021-12-14 22:12:47 No symbol table info available.\r\n2021-12-14 22:12:47 #2 0x0000000009d7045b in __sanitizer::ReadFileToBuffer(char const*, char**, unsigned long*, unsigned long*, unsigned long, int*) ()\r\n2021-12-14 22:12:47 No symbol table info available.\r\n2021-12-14 22:12:47 #3 0x0000000009d79b94 in __sanitizer::ReadProcMaps(__sanitizer::ProcSelfMapsBuff*) ()\r\n2021-12-14 22:12:47 No symbol table info available.\r\n2021-12-14 22:12:47 #4 0x0000000009d792ca in __sanitizer::MemoryMappingLayout::MemoryMappingLayout(bool) ()\r\n2021-12-14 22:12:47 No symbol table info available.\r\n2021-12-14 22:12:47 #5 0x0000000009d7d3ad in __sanitizer::ListOfModules::fallbackInit() ()\r\n2021-12-14 22:12:47 No symbol table info available.\r\n2021-12-14 22:12:47 #6 0x0000000009d83d41 in __sanitizer::Symbolizer::FindModuleForAddress(unsigned long) ()\r\n2021-12-14 22:12:47 No symbol table info available.\r\n2021-12-14 22:12:47 #7 0x0000000009d838f8 in __sanitizer::Symbolizer::SymbolizePC(unsigned long) ()\r\n2021-12-14 22:12:47 No symbol table info available.\r\n2021-12-14 22:12:47 #8 0x0000000009e0018e in __tsan::SymbolizeStack(__sanitizer::StackTrace) ()\r\n2021-12-14 22:12:47 No symbol table info available.\r\n2021-12-14 22:12:47 #9 0x0000000009e03c6d in __tsan::PrintCurrentStackSlow(unsigned long) ()\r\n2021-12-14 22:12:47 
No symbol table info available.
#10 0x0000000009df0966 in __tsan::CheckUnwind() ()
No symbol table info available.
#11 0x0000000009d7c3e5 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) ()
No symbol table info available.
#12 0x0000000009d76dcf in __sanitizer::UnmapOrDie(void*, unsigned long) ()
No symbol table info available.
#13 0x0000000009deb92e in __tsan::user_free(__tsan::ThreadState*, unsigned long, void*, bool) ()
No symbol table info available.
#14 0x0000000009d93e0b in free ()
No symbol table info available.
#15 0x0000000009e73ff2 in Allocator<false, false>::freeNoTrack (this=this@entry=0x7b24009e6fe0, buf=buf@entry=0x7f8978364000, size=size@entry=1048591) at ../src/Common/Allocator.h:257
No locals.
#16 0x0000000009e73e98 in Allocator<false, false>::free (this=0x7b24009e6fe0, buf=0x7f8978364000, size=1048591) at ../src/Common/Allocator.h:105
No locals.
#17 0x0000000009e812f0 in DB::Memory<Allocator<false, false> >::dealloc (this=0x7b24009e6fe0) at ../src/IO/BufferWithOwnMemory.h:136
No locals.
#18 DB::Memory<Allocator<false, false> >::~Memory (this=0x7b24009e6fe0) at ../src/IO/BufferWithOwnMemory.h:48
No locals.
#19 DB::BufferWithOwnMemory<DB::WriteBuffer>::~BufferWithOwnMemory (this=<optimized out>) at ../src/IO/BufferWithOwnMemory.h:146
No locals.
#20 DB::WriteBufferFromFileDescriptor::~WriteBufferFromFileDescriptor (this=<optimized out>) at ../src/IO/WriteBufferFromFileDescriptor.cpp:98
No locals.
#21 0x0000000009f50600 in DB::WriteBufferFromFile::~WriteBufferFromFile (this=0x7b24009e6fa0) at ../src/IO/WriteBufferFromFile.cpp:76
No locals.
#22 DB::WriteBufferFromFile::~WriteBufferFromFile (this=0x7b24009e6fa0) at ../src/IO/WriteBufferFromFile.cpp:73
No locals.
#23 0x00000000171e641e in std::__1::default_delete<DB::WriteBufferFromFileBase>::operator() (__ptr=0x7b24009e6fa0, this=<optimized out>) at ../contrib/libcxx/include/memory:1397
No locals.
#24 std::__1::unique_ptr<DB::WriteBufferFromFileBase, std::__1::default_delete<DB::WriteBufferFromFileBase> >::reset (__p=0x0, this=<optimized out>) at ../contrib/libcxx/include/memory:1658
        __tmp = 0x7b24009e6fa0
#25 std::__1::unique_ptr<DB::WriteBufferFromFileBase, std::__1::default_delete<DB::WriteBufferFromFileBase> >::~unique_ptr (this=<optimized out>) at ../contrib/libcxx/include/memory:1612
No locals.
#26 DB::DataPartsExchange::Fetcher::downloadBaseOrProjectionPartToDisk (this=this@entry=0x7b84002618a8, replica_path=..., part_download_path=..., sync=false, disk=..., in=..., checksums=..., throttler=...) at ../src/Storages/MergeTree/DataPartsExchange.cpp:678
        i = 5
        files = <optimized out>
#27 0x00000000171e4e83 in DB::DataPartsExchange::Fetcher::downloadPartToDisk (this=this@entry=0x7b84002618a8, part_name=..., replica_path=..., to_detached=false, tmp_prefix_=..., sync=false, disk=..., in=..., projections=0, checksums=..., throttler=...) at ../src/Storages/MergeTree/DataPartsExchange.cpp:734
        TMP_PREFIX = "tmp-fetch_"
        tmp_prefix = "tmp-fetch_"
        part_relative_path = "tmp-fetch_all_47_47_0_55"
        part_download_path = "store/0d5/0d56dc9c-9798-41e7-8d56-dc9c979831e7/tmp-fetch_all_47_47_0_55/"
        volume = {__ptr_ = 0x0, __cntrl_ = 0x0}
        new_data_part = {__ptr_ = 0xc8c1500000000000, __cntrl_ = 0x7f88ca723054}
        sync_guard = {__ptr_ = 0x0}
        metric_increment = {what = <optimized out>, amount = 1}
#28 0x00000000171dff4b in DB::DataPartsExchange::Fetcher::fetchPart (this=0x0, metadata_snapshot=..., context=..., part_name=..., replica_path=..., host=..., port=9009, timeouts=..., user=..., password=..., interserver_scheme=..., throttler=..., to_detached=false, tmp_prefix_=..., tagger_ptr=0x7f8c8d85cbb0, try_zero_copy=<optimized out>, disk=...) at ../src/Storages/MergeTree/DataPartsExchange.cpp:549
        part_info = {partition_id = "all", min_block = 47, max_block = 47, level = 0, mutation = 55}
        uri = {_scheme = "http", _host = "1ae6ea27dd94", _port = 9009, _query = "endpoint=DataPartsExchange%3A%2Fclickhouse%2Ftables%2F01079_parallel_alter_modify_zookeeper_long_test_7ts6hs%2Fconcurrent_alter_mt%2Freplicas%2F4&part=all_47_47_0_55&client_protocol_version=7&compress"...}
        in = <DB::PooledReadWriteBufferFromHTTP, method = "POST", buffer_size = 1048576, offset_from_begin_pos = 855, http_max_tries = 1>
        part_type = "Compact"
        part_uuid = {t = {items = {0, 0}}}
        remote_fs_metadata = ""
        storage_id = {database_name = "test_7ts6hs", table_name = "concurrent_alter_mt_3"}
        new_part_path = "/var/lib/clickhouse/store/0d5/0d56dc9c-9798-41e7-8d56-dc9c979831e7/all_47_47_0_55/"
        entry = {__ptr_ = 0x7b08000b7c60}
        checksums = {files = {...}}
        reservation = <optimized out>
        sum_files_size = <optimized out>
        sync = <optimized out>
        projections = 0
        server_protocol_version = <optimized out>
        [... remaining locals are raw libc++ / Poco::URI string internals, omitted ...]
#29 0x000000001704195c in DB::StorageReplicatedMergeTree::fetchPart(...)::$_19::operator()() const (this=0x7b4000c19500) at ../src/Storages/StorageReplicatedMergeTree.cpp:3881
No locals.
#30 std::__1::__invoke<DB::StorageReplicatedMergeTree::fetchPart(...)::$_19&> (__f=...) at ../contrib/libcxx/include/type_traits:3676
No locals.
#31 std::__1::__invoke_void_return_wrapper<std::__1::shared_ptr<DB::IMergeTreeDataPart> >::__call<DB::StorageReplicatedMergeTree::fetchPart(...)::$_19&>(...) at ../contrib/libcxx/include/__functional_base:317
No locals.
#32 std::__1::__function::__default_alloc_func<DB::StorageReplicatedMergeTree::fetchPart(...)::$_19, std::__1::shared_ptr<DB::IMergeTreeDataPart> ()>::operator()() (this=0x7b4000c19500) at ../contrib/libcxx/include/functional:1608
No locals.
#33 std::__1::__function::__policy_invoker<std::__1::shared_ptr<DB::IMergeTreeDataPart> ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::StorageReplicatedMergeTree::fetchPart(...)::$_19, std::__1::shared_ptr<DB::IMergeTreeDataPart> ()> >(std::__1::__function::__policy_storage const*) (__buf=<optimized out>) at ../contrib/libcxx/include/functional:2089
        __f = 0x7b4000c19500
#34 0x0000000016fc5b05 in std::__1::__function::__policy_func<std::__1::shared_ptr<DB::IMergeTreeDataPart> ()>::operator()() const (this=0x7f8c8d85cfa8) at ../contrib/libcxx/include/functional:2221
No locals.
#35 std::__1::function<std::__1::shared_ptr<DB::IMergeTreeDataPart> ()>::operator()() const (this=0x7f8c8d85cfa8) at ../contrib/libcxx/include/functional:2560
No locals.
#36 DB::StorageReplicatedMergeTree::fetchPart (this=<optimized out>, this@entry=0x7b8400260c00, part_name=..., metadata_snapshot=..., source_replica_path=..., to_detached=<optimized out>, quorum=quorum@entry=0, zookeeper_=...) at ../src/Storages/StorageReplicatedMergeTree.cpp:3902
        zookeeper = {__ptr_ = 0x7b3801934418, __cntrl_ = 0x7b3801934400}
        part_info = {partition_id = "all", min_block = 47, max_block = 47, level = 0, mutation = 55}
        scope_exit3796 = {function = {this = 0x7b8400260c00, part_name = @0x7f8c8d85d188}}
        table_lock_holder = {__ptr_ = 0x7b18009c4878, __cntrl_ = 0x7b18009c4860}
        stopwatch = {start_ns = 8143859940835, stop_ns = 0, clock_type = 1, is_running = true}
        part = {__ptr_ = 0x0, __cntrl_ = 0x0}
        replaced_parts = {__begin_ = 0x0, __end_ = 0x0}
        write_part_log = {this = 0x7b8400260c00, stopwatch = @0x7f8c8d85cb48, part_name = @0x7f8c8d85d188, part = @0x7f8c8d85cdb0, replaced_parts = @0x7f8c8d85cdd0}
        part_to_clone = {__ptr_ = 0x0, __cntrl_ = 0x0}
        address = {host = "1ae6ea27dd94", replication_port = 9009, queries_port = 9000, database = "test_7ts6hs", table = "concurrent_alter_mt_4", scheme = "http"}
        timeouts = {connection_timeout = {_span = 1000000}, send_timeout = {_span = 60000000}, receive_timeout = {_span = 60000000}, tcp_keep_alive_timeout = {_span = 290000000}, http_keep_alive_timeout = {_span = 3000000}, secure_connection_timeout = {_span = 1000000}, hedged_connection_timeout = {_span = 60000000}, receive_data_timeout = {_span = 60000000}}
        tagger_ptr = {__engaged_ = false}
        get_part = {__f_ = {__buf_ = {__small = "\000\225\301\000@{\000\000\001\000\000\000\000\000\000", __large = 0x7b4000c19500}, __invoker_ = {__call_ = 0x170415a0 <std::__1::__function::__policy_invoker<std::__1::shared_ptr<DB::IMergeTreeDataPart>
()>::__call_impl<std::__1::__function::__default_alloc_func<DB::StorageReplicatedMergeTree::fetchPart(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, unsigned long, std::__1::shared_ptr<zkutil::ZooKeeper>)::$_19, std::__1::shared_ptr<DB::IMergeTreeDataPart> ()> >(std::__1::__function::__policy_storage const*)>}, __policy_ = 0x54a4618 <std::__1::__function::__policy::__choose_policy<std::__1::__function::__default_alloc_func<DB::StorageReplicatedMergeTree::fetchPart(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, unsigned long, std::__1::shared_ptr<zkutil::ZooKeeper>)::$_19, std::__1::shared_ptr<DB::IMergeTreeDataPart> ()> >(std::__1::integral_constant<bool, false>)::__policy_>}}\r\n2021-12-14 22:12:50 interserver_scheme = {<std::__1::__basic_string_common<true>> = {<No data fields>}, static __short_mask = 128, static __long_mask = 9223372036854775808, __r_ = {<std::__1::__compressed_pair_elem<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__rep, 0, false>> = {__value_ = {{__l = {__data_ = 0x70747468 <error: Cannot access memory at address 0x70747468>, __size_ = <optimized out>, __cap_ = 288230376151711748}, __s = {__data_ = {104 'h', 116 't', 116 't', 112 'p', 0 '\\000', 0 '\\000', 0 '\\000', 0 '\\000', <optimized out>, <optimized out>, <optimized out>, <optimized out>, <optimized out>, <optimized out>, <optimized out>, <optimized out>, 4 '\\004', 0 '\\000', 0 '\\000', 0 '\\000', 0 '\\000', 0 '\\000', 0 '\\000'}, {<std::__1::__padding<char, 1>> = {<No data fields>}, __size_ = 4 '\\004'}}, __r = {__words = {1886680168, <optimized out>, 288230376151711748}}}}}, <std::__1::__compressed_pair_elem<std::__1::allocator<char>, 1, true>> = {<std::__1::allocator<char>> = {<No data fields>}, <No data fields>}, <No data fields>}, static npos = 18446744073709551615}\r\n2021-12-14 22:12:50 credentials = {__ptr_ = <optimized out>, __cntrl_ = 0x7b080000f020}\r\n2021-12-14 22:12:50 e = <optimized out>\r\n2021-12-14 22:12:50 #37 0x0000000016fba61d in DB::StorageReplicatedMergeTree::executeFetch (this=<optimized out>, entry=...) 
at ../src/Storages/StorageReplicatedMergeTree.cpp:1734\r\n2021-12-14 22:12:50 part_name = {<std::__1::__basic_string_common<true>> = {<No data fields>}, static __short_mask = 128, static __long_mask = 9223372036854775808, __r_ = {<std::__1::__compressed_pair_elem<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__rep, 0, false>> = {__value_ = {{__l = {__data_ = 0x345f37345f6c6c61 <error: Cannot access memory at address 0x345f37345f6c6c61>, __size_ = 58503346544439, __cap_ = 1008806316530991104}, __s = {__data_ = \"all_47_47_0_55\\000\\000\\000\\000\\000\\000\\000\\000\", {<std::__1::__padding<char, 1>> = {<No data fields>}, __size_ = 14 '\\016'}}, __r = {__words = {3773795710838533217, 58503346544439, 1008806316530991104}}}}}, <std::__1::__compressed_pair_elem<std::__1::allocator<char>, 1, true>> = {<std::__1::allocator<char>> = {<No data fields>}, <No data fields>}, <No data fields>}, static npos = 18446744073709551615}\r\n2021-12-14 22:12:50 e = <optimized out>\r\n2021-12-14 22:12:50 replica = {<std::__1::__basic_string_common<true>> = {<No data fields>}, static __short_mask = 128, static __long_mask = 9223372036854775808, __r_ = {<std::__1::__compressed_pair_elem<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__rep, 0, false>> = {__value_ = {{__l = {__data_ = 0x34 <error: Cannot access memory at address 0x34>, __size_ = 0, __cap_ = 72057594037927936}, __s = {__data_ = \"4\", '\\000' <repeats 21 times>, {<std::__1::__padding<char, 1>> = {<No data fields>}, __size_ = 1 '\\001'}}, __r = {__words = {52, 0, 72057594037927936}}}}}, <std::__1::__compressed_pair_elem<std::__1::allocator<char>, 1, true>> = {<std::__1::allocator<char>> = {<No data fields>}, <No data fields>}, <No data fields>}, static npos = 18446744073709551615}\r\n2021-12-14 22:12:50 storage_settings_ptr = {__ptr_ = 0x7b68001bfc00, __cntrl_ = 0x7b08004a0080}\r\n2021-12-14 22:12:50 metadata_snapshot = {__ptr_ = 0x7b7000271800, __cntrl_ = 0x7b0800423d20}\r\n2021-12-14 22:12:50 #38 0x000000001746c54a in DB::ReplicatedMergeMutateTaskBase::executeImpl()::$_1::operator()() const (this=<optimized out>) at ../src/Storages/MergeTree/ReplicatedMergeMutateTaskBase.cpp:145\r\n2021-12-14 22:12:50 No locals.\r\n2021-12-14 22:12:50 #39 DB::ReplicatedMergeMutateTaskBase::executeImpl (this=this@entry=0x7b4801aa2a00) at ../src/Storages/MergeTree/ReplicatedMergeMutateTaskBase.cpp:168\r\n2021-12-14 22:12:51 res = false\r\n2021-12-14 22:12:51 switcher = {__ptr_ = {<std::__1::__compressed_pair_elem<DB::MemoryTrackerThreadSwitcher*, 0, false>> = {__value_ = 0x0}, <std::__1::__compressed_pair_elem<std::__1::default_delete<DB::MemoryTrackerThreadSwitcher>, 1, true>> = {<std::__1::default_delete<DB::MemoryTrackerThreadSwitcher>> = {<No data fields>}, <No data fields>}, <No data fields>}}\r\n2021-12-14 22:12:51 remove_processed_entry = {this = 0x7b4801aa2a00}\r\n2021-12-14 22:12:51 execute_fetch = {this = 0x7b4801aa2a00, remove_processed_entry = @0x7f8c8d85d4c0}\r\n2021-12-14 22:12:51 #40 0x000000001746b510 in DB::ReplicatedMergeMutateTaskBase::executeStep (this=0x7b4801aa2a00) at ../src/Storages/MergeTree/ReplicatedMergeMutateTaskBase.cpp:41\r\n2021-12-14 22:12:51 e = <optimized out>\r\n2021-12-14 22:12:51 saved_exception = {__ptr_ = 0x0}\r\n2021-12-14 22:12:51 #41 0x0000000017277f8b in DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::routine (this=this@entry=0x7b4c00001340, item=...) 
at ../src/Storages/MergeTree/MergeTreeBackgroundExecutor.cpp:87\r\n2021-12-14 22:12:51 erase_from_active = {this = 0x7b4c00001340, item = {__ptr_ = 0x7b280042aff8, __cntrl_ = 0x7b280042afe0}}\r\n2021-12-14 22:12:51 need_execute_again = false\r\n2021-12-14 22:12:51 #42 0x0000000017278bc6 in DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::threadFunction (this=0x0) at ../src/Storages/MergeTree/MergeTreeBackgroundExecutor.cpp:176\r\n2021-12-14 22:12:51 item = {__ptr_ = 0x0, __cntrl_ = 0x0}\r\n2021-12-14 22:12:51 #43 0x000000001727dc42 in DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::MergeTreeBackgroundExecutor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long, unsigned long, unsigned long)::{lambda()#1}::operator()() const (this=0x7f8c8d85d8a8) at ../src/Storages/MergeTree/MergeTreeBackgroundExecutor.h:184\r\n2021-12-14 22:12:51 No locals.\r\n2021-12-14 22:12:51 #44 std::__1::__invoke<DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::MergeTreeBackgroundExecutor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long, unsigned long, unsigned long)::{lambda()#1}&> (__f=...) at ../contrib/libcxx/include/type_traits:3676\r\n2021-12-14 22:12:51 No locals.\r\n2021-12-14 22:12:51 #45 std::__1::__invoke_void_return_wrapper<void>::__call<DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::MergeTreeBackgroundExecutor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long, unsigned long, unsigned long)::{lambda()#1}&>(DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::MergeTreeBackgroundExecutor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long, unsigned long, unsigned long)::{lambda()#1}&) (__args=...) at ../contrib/libcxx/include/__functional_base:348\r\n2021-12-14 22:12:51 No locals.\r\n2021-12-14 22:12:51 #46 std::__1::__function::__default_alloc_func<DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::MergeTreeBackgroundExecutor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long, unsigned long, unsigned long)::{lambda()#1}, void ()>::operator()() (this=0x7f8c8d85d8a8) at ../contrib/libcxx/include/functional:1608\r\n2021-12-14 22:12:51 No locals.\r\n2021-12-14 22:12:51 #47 std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::MergeTreeBackgroundExecutor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long, unsigned long, unsigned long)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*) (__buf=0x7f8c8d85d8a8) at ../contrib/libcxx/include/functional:2089\r\n2021-12-14 22:12:51 __f = 0x7f8c8d85d8a8\r\n2021-12-14 22:12:51 #48 0x0000000009e7cf58 in std::__1::__function::__policy_func<void ()>::operator()() const (this=0x7f8c8d85d8a8) at ../contrib/libcxx/include/functional:2221\r\n2021-12-14 22:12:51 No locals.\r\n2021-12-14 22:12:51 #49 std::__1::function<void ()>::operator()() const (this=0x7f8c8d85d8a8) at ../contrib/libcxx/include/functional:2560\r\n2021-12-14 22:12:51 No locals.\r\n2021-12-14 22:12:51 #50 ThreadPoolImpl<ThreadFromGlobalPool>::worker (this=this@entry=0x7b4c00001410, thread_it=thread_it@entry=...) 
at ../src/Common/ThreadPool.cpp:274\r\n2021-12-14 22:12:51 metric_active_threads = {what = <optimized out>, amount = 1}\r\n2021-12-14 22:12:51 job = {<std::__1::__function::__maybe_derive_from_unary_function<void ()>> = {<No data fields>}, <std::__1::__function::__maybe_derive_from_binary_function<void ()>> = {<No data fields>}, __f_ = {__buf_ = {__small = \"@\\023\\000\\000L{\\000\\000\\370\\277\\325t\\374\\177\\000\", __large = 0x7b4c00001340}, __invoker_ = {__call_ = 0x1727dc20 <std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::MergeTreeBackgroundExecutor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long, unsigned long, unsigned long)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*)>}, __policy_ = 0x54b9440 <std::__1::__function::__policy::__choose_policy<std::__1::__function::__default_alloc_func<DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::MergeTreeBackgroundExecutor(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long, unsigned long, unsigned long)::{lambda()#1}, void ()> >(std::__1::integral_constant<bool, true>)::__policy_>}}\r\n2021-12-14 22:12:51 need_shutdown = <optimized out>\r\n2021-12-14 22:12:51 metric_all_threads = {what = <optimized out>, amount = 1}\r\n2021-12-14 22:12:51 #51 0x0000000009e7f519 in ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}::operator()() const (this=<optimized out>) at ../src/Common/ThreadPool.cpp:139\r\n2021-12-14 22:12:51 No locals.\r\n2021-12-14 22:12:51 #52 std::__1::__invoke_constexpr<ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&> (__f=...) at ../contrib/libcxx/include/type_traits:3682\r\n2021-12-14 22:12:51 No locals.\r\n2021-12-14 22:12:51 #53 std::__1::__apply_tuple_impl<ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&, std::__1::tuple<>&>(ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&, std::__1::tuple<>&, std::__1::__tuple_indices<>) (__f=..., __t=...) at ../contrib/libcxx/include/tuple:1415\r\n2021-12-14 22:12:51 No locals.\r\n2021-12-14 22:12:51 #54 std::__1::apply<ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&, std::__1::tuple<>&>(ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&, std::__1::tuple<>&) (__f=..., __t=...) 
at ../contrib/libcxx/include/tuple:1424\r\n2021-12-14 22:12:51 No locals.\r\n2021-12-14 22:12:51 #55 ThreadFromGlobalPool::ThreadFromGlobalPool<ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>(ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&&)::{lambda()#1}::operator()() (this=0x7b10000166c0) at ../src/Common/ThreadPool.h:188\r\n2021-12-14 22:12:52 event = {__ptr_ = 0x7b2000003e18, __cntrl_ = 0x7b2000003e00}\r\n2021-12-14 22:12:52 scope_exit176 = {static is_nullable = false, function = {event = @0x7f8c8d85db68}}\r\n2021-12-14 22:12:52 thread_status = {<boost::noncopyable_::noncopyable> = {<boost::noncopyable_::base_token> = {<No data fields>}, <No data fields>}, thread_id = 547, os_thread_priority = 0, performance_counters = {counters = 0x7b7000130000, counters_holder = {__ptr_ = {<std::__1::__compressed_pair_elem<std::__1::atomic<unsigned long>*, 0, false>> = {__value_ = 0x7b7000130000}, <std::__1::__compressed_pair_elem<std::__1::default_delete<std::__1::atomic<unsigned long> []>, 1, true>> = {<std::__1::default_delete<std::__1::atomic<unsigned long> []>> = {<No data fields>}, <No data fields>}, <No data fields>}}, parent = 0x20c60f28 <ProfileEvents::global_counters>, level = VariableContext::Thread, static num_counters = 235}, memory_tracker = {amount = {<std::__1::__atomic_base<long, true>> = {<std::__1::__atomic_base<long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<long>> = {__a_value = 4214987}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, peak = {<std::__1::__atomic_base<long, true>> = {<std::__1::__atomic_base<long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<long>> = {__a_value = 17915967}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, hard_limit = {<std::__1::__atomic_base<long, true>> = {<std::__1::__atomic_base<long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, profiler_limit = {<std::__1::__atomic_base<long, true>> = {<std::__1::__atomic_base<long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, profiler_step = 0, fault_probability = 0, sample_probability = 0, parent = {<std::__1::__atomic_base<MemoryTracker*, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<MemoryTracker*>> = {__a_value = 0x20c611a8 <total_memory_tracker>}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, metric = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 75}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, description_ptr = {<std::__1::__atomic_base<char const*, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<char const*>> = {__a_value = 0x4c2ee6c \"(for thread)\"}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, static USAGE_EVENT_NAME = <optimized out>, level = VariableContext::Thread}, untracked_memory = 2097438, untracked_memory_limit = 4194304, progress_in 
= {read_rows = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, read_bytes = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, read_raw_bytes = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, total_rows_to_read = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, total_raw_bytes_to_read = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, written_rows = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, written_bytes = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}}, progress_out = {read_rows = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, read_bytes = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, read_raw_bytes = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, total_rows_to_read = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, total_raw_bytes_to_read = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static 
is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, written_rows = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, written_bytes = {<std::__1::__atomic_base<unsigned long, true>> = {<std::__1::__atomic_base<unsigned long, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<unsigned long>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}}, deleter = {<std::__1::__function::__maybe_derive_from_unary_function<void ()>> = {<No data fields>}, <std::__1::__function::__maybe_derive_from_binary_function<void ()>> = {<No data fields>}, __f_ = {__buf_ = {__small = \"\\000\\000\\000\\000\\000\\000\\000\\000\\250\\332\\205\\215\\214\\177\\000\", __large = 0x0}, __invoker_ = {__call_ = 0x9e69780 <std::__1::__function::__policy_invoker<void ()>::__call_empty(std::__1::__function::__policy_storage const*)>}, __policy_ = 0x4c6ff98 <std::__1::__function::__policy::__create_empty()::__policy_>}}, thread_trace_context = {trace_id = {t = {items = {0, 0}}}, span_id = 0, tracestate = {<std::__1::__basic_string_common<true>> = {<No data fields>}, static __short_mask = 128, static __long_mask = 9223372036854775808, __r_ = {<std::__1::__compressed_pair_elem<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__rep, 0, false>> = {__value_ = {{__l = {__data_ = 0x0, __size_ = 0, __cap_ = 0}, __s = {__data_ = '\\000' <repeats 22 times>, {<std::__1::__padding<char, 1>> = {<No data fields>}, __size_ = 0 '\\000'}}, __r = {__words = {0, 0, 0}}}}}, <std::__1::__compressed_pair_elem<std::__1::allocator<char>, 1, true>> = {<std::__1::allocator<char>> = {<No data fields>}, <No data fields>}, <No data fields>}, static npos = 18446744073709551615}, trace_flags = 0 '\\000'}, thread_group = {__ptr_ = 0x0, __cntrl_ = 0x0}, thread_state = {<std::__1::__atomic_base<int, true>> = {<std::__1::__atomic_base<int, false>> = {__a_ = {<std::__1::__cxx_atomic_base_impl<int>> = {__a_value = 0}, <No data fields>}, static is_always_lock_free = <optimized out>}, <No data fields>}, <No data fields>}, global_context = {__ptr_ = 0x0, __cntrl_ = 0x0}, query_context = {__ptr_ = 0x0, __cntrl_ = 0x0}, query_id = {<std::__1::__basic_string_common<true>> = {<No data fields>}, static __short_mask = 128, static __long_mask = 9223372036854775808, __r_ = {<std::__1::__compressed_pair_elem<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__rep, 0, false>> = {__value_ = {{__l = {__data_ = 0x7b1000414040 \"\", __size_ = 0, __cap_ = 9223372036854775872}, __s = {__data_ = \"@@A\\000\\020{\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000@\\000\\000\\000\\000\\000\", {<std::__1::__padding<char, 1>> = {<No data fields>}, __size_ = 128 '\\200'}}, __r = {__words = {135308653969472, 0, 9223372036854775872}}}}}, <std::__1::__compressed_pair_elem<std::__1::allocator<char>, 1, true>> = {<std::__1::allocator<char>> = {<No data fields>}, <No data fields>}, <No data fields>}, static npos = 18446744073709551615}, logs_queue_ptr = {__ptr_ = 0x0, __cntrl_ = 0x0}, profile_queue_ptr = {__ptr_ = 0x0, __cntrl_ = 0x0}, performance_counters_finalized = false, query_start_time_nanoseconds = 0, query_start_time_microseconds = 0, query_start_time = 0, queries_started = 0, 
query_profiler_real = {__ptr_ = {<std::__1::__compressed_pair_elem<DB::QueryProfilerReal*, 0, false>> = {__value_ = 0x0}, <std::__1::__compressed_pair_elem<std::__1::default_delete<DB::QueryProfilerReal>, 1, true>> = {<std::__1::default_delete<DB::QueryProfilerReal>> = {<No data fields>}, <No data fields>}, <No data fields>}}, query_profiler_cpu = {__ptr_ = {<std::__1::__compressed_pair_elem<DB::QueryProfilerCPU*, 0, false>> = {__value_ = 0x0}, <std::__1::__compressed_pair_elem<std::__1::default_delete<DB::QueryProfilerCPU>, 1, true>> = {<std::__1::default_delete<DB::QueryProfilerCPU>> = {<No data fields>}, <No data fields>}, <No data fields>}}, log = 0x7b100000a440, last_rusage = {__ptr_ = {<std::__1::__compressed_pair_elem<DB::RUsageCounters*, 0, false>> = {__value_ = 0x7b0c0002b350}, <std::__1::__compressed_pair_elem<std::__1::default_delete<DB::RUsageCounters>, 1, true>> = {<std::__1::default_delete<DB::RUsageCounters>> = {<No data fields>}, <No data fields>}, <No data fields>}}, taskstats = {__ptr_ = {<std::__1::__compressed_pair_elem<DB::TasksStatsCounters*, 0, false>> = {__value_ = 0x0}, <std::__1::__compressed_pair_elem<std::__1::default_delete<DB::TasksStatsCounters>, 1, true>> = {<std::__1::default_delete<DB::TasksStatsCounters>> = {<No data fields>}, <No data fields>}, <No data fields>}}, fatal_error_callback = {<std::__1::__function::__maybe_derive_from_unary_function<void ()>> = {<No data fields>}, <std::__1::__function::__maybe_derive_from_binary_function<void ()>> = {<No data fields>}, __f_ = {__buf_ = {__small = \"\\000\\000\\000\\000\\000\\000\\000\\000\\250\\036\\222\\034\\215\\177\\000\", __large = 0x0}, __invoker_ = {__call_ = 0x9e69780 <std::__1::__function::__policy_invoker<void ()>::__call_empty(std::__1::__function::__policy_storage const*)>}, __policy_ = 0x4c6ff98 <std::__1::__function::__policy::__create_empty()::__policy_>}}, query_profiler_enabled = true}\r\n2021-12-14 22:12:52 function = {this = 0x7b4c00001410, it = {__ptr_ = 0x7b0c00075e40}}\r\n2021-12-14 22:12:52 arguments = <optimized out>\r\n2021-12-14 22:12:52 #56 0x0000000009e7f3c2 in std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>(ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&&)::{lambda()#1}&> (__f=...) at ../contrib/libcxx/include/type_traits:3676\r\n2021-12-14 22:12:52 No locals.\r\n2021-12-14 22:12:52 #57 std::__1::__invoke_void_return_wrapper<void>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>(ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&&)::{lambda()#1}&>(ThreadFromGlobalPool::ThreadFromGlobalPool<ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>(ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&&)::{lambda()#1}&) (__args=...) 
at ../contrib/libcxx/include/__functional_base:348\r\n2021-12-14 22:12:52 No locals.\r\n2021-12-14 22:12:52 #58 std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>(ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&&)::{lambda()#1}, void ()>::operator()() (this=0x0) at ../contrib/libcxx/include/functional:1608\r\n2021-12-14 22:12:52 No locals.\r\n2021-12-14 22:12:52 #59 std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>(ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&&)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*) (__buf=0x7f8c8d85dc98) at ../contrib/libcxx/include/functional:2089\r\n2021-12-14 22:12:52 __f = 0x0\r\n2021-12-14 22:12:52 #60 0x0000000009e7a5ce in std::__1::__function::__policy_func<void ()>::operator()() const (this=0x7f8c8d85dc98) at ../contrib/libcxx/include/functional:2221\r\n2021-12-14 22:12:52 No locals.\r\n2021-12-14 22:12:52 #61 std::__1::function<void ()>::operator()() const (this=0x7f8c8d85dc98) at ../contrib/libcxx/include/functional:2560\r\n2021-12-14 22:12:52 No locals.\r\n2021-12-14 22:12:52 #62 ThreadPoolImpl<std::__1::thread>::worker (this=this@entry=0x7b3c00007800, thread_it=...) at ../src/Common/ThreadPool.cpp:274\r\n2021-12-14 22:12:52 metric_active_threads = {what = <optimized out>, amount = 1}\r\n2021-12-14 22:12:52 job = {<std::__1::__function::__maybe_derive_from_unary_function<void ()>> = {<No data fields>}, <std::__1::__function::__maybe_derive_from_binary_function<void ()>> = {<No data fields>}, __f_ = {__buf_ = {__small = \"\\300f\\001\\000\\020{\\000\\000\\340\\373\\000\\000\\b{\\000\", __large = 0x7b10000166c0}, __invoker_ = {__call_ = 0x9e7f3a0 <std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>(ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&&)::{lambda()#1}, void ()> >(std::__1::__function::__policy_storage const*)>}, __policy_ = 0x4c72880 <std::__1::__function::__policy::__choose_policy<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>(ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}&&)::{lambda()#1}, void ()> >(std::__1::integral_constant<bool, false>)::__policy_>}}\r\n2021-12-14 22:12:52 need_shutdown = <optimized out>\r\n2021-12-14 22:12:52 metric_all_threads = {what = <optimized out>, amount = 1}\r\n2021-12-14 22:12:52 #63 0x0000000009e7dd91 in ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, 
std::__1::optional<unsigned long>)::{lambda()#2}::operator()() const (this=0x7b080000fb88) at ../src/Common/ThreadPool.cpp:139\r\n2021-12-14 22:12:52 No locals.\r\n2021-12-14 22:12:52 #64 std::__1::__invoke<ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}> (__f=...) at ../contrib/libcxx/include/type_traits:3676\r\n2021-12-14 22:12:52 No locals.\r\n2021-12-14 22:12:52 #65 std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>&, std::__1::__tuple_indices<>) (__t=...) at ../contrib/libcxx/include/thread:280\r\n2021-12-14 22:12:52 No locals.\r\n2021-12-14 22:12:52 #66 std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}> >(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#2}>) (__vp=0x7b080000fb80) at ../contrib/libcxx/include/thread:291\r\n2021-12-14 22:12:52 __p = {__ptr_ = {<std::__1::__compressed_pair_elem<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, (lambda at ../src/Common/ThreadPool.cpp:139:42)> *, 0, false>> = {__value_ = 0x7b080000fb80}, <std::__1::__compressed_pair_elem<std::__1::default_delete<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, (lambda at ../src/Common/ThreadPool.cpp:139:42)> >, 1, true>> = {<std::__1::default_delete<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, (lambda at ../src/Common/ThreadPool.cpp:139:42)> >> = {<No data fields>}, <No data fields>}, <No data fields>}}\r\n2021-12-14 22:12:52 #67 0x0000000009d94e3d in __tsan_thread_start_func ()\r\n2021-12-14 22:12:52 No symbol table info available.\r\n2021-12-14 22:12:52 #68 0x00007f8d20bb3609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0\r\n2021-12-14 22:12:52 No symbol table info available.\r\n2021-12-14 22:12:52 #69 0x00007f8d20ada293 in clone () from /lib/x86_64-linux-gnu/libc.so.6\r\n2021-12-14 22:12:52 No symbol table info available.\r\n2021-12-14 22:12:52 No symbol table info available.\r\n2021-12-14 22:12:52 [Inferior 1 (process 515) detached]\r\n```" | https://github.com/ClickHouse/ClickHouse/issues/32773 | https://github.com/ClickHouse/ClickHouse/pull/38977 | 3e648acb3b2de7ab53940e25d63acd8c7118e737 | bd97233a4f64f2148c44857aaea3115a6debe668 | "2021-12-14T20:18:57Z" | c++ | "2022-07-08T08:04:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,750 | ["src/Server/HTTPHandler.cpp", "tests/queries/0_stateless/02152_http_external_tables_memory_tracking.reference", "tests/queries/0_stateless/02152_http_external_tables_memory_tracking.sh"] | After update to v21.8.11 from v21.3.17 memory usage constantly increasing. | Mark_cache_size is set to default 5GB.
Dictionaries are not used.
The uncompressed cache is disabled.
Update to v21.8.12.29 didn't help.

```
# ch_query () {
> clickhouse-client -m -n -q \
> "
> SELECT 'Allocated by dict' as alloc, formatReadableSize(sum(bytes_allocated)) FROM system.dictionaries;
>
> SELECT
> database,
> name,
> formatReadableSize(total_bytes)
> FROM system.tables
> WHERE engine IN ('Memory','Set','Join');
>
> SELECT
> 'primary_key_bytes_in_memory' as pk_in_memory, formatReadableSize(sum(primary_key_bytes_in_memory)) AS primary_key_bytes_in_memory,
> 'primary_key_bytes_in_memory_allocated' as pk_allocated_in_memory, formatReadableSize(sum(primary_key_bytes_in_memory_allocated)) AS primary_key_bytes_in_memory_allocated
> FROM system.parts;
>
> SELECT
> metric,
> formatReadableSize(value)
> FROM system.asynchronous_metrics
> WHERE metric IN ('UncompressedCacheBytes', 'MarkCacheBytes');";
> }
# ch_query;\
> ps -q $(pidof clickhouse-server) -o rss=;\
> for i in {1..500};
> do for j in {1..20};
> do
> (curl -m1 -s -F '[email protected];' 'http://127.0.0.1:8123/?readonly=1&cancel_http_readonly_queries_on_client_close=1&metrics_list_structure=Path+String&query=SELECT+Path,+groupArray(Time),+groupArray(Value),+groupArray(Timestamp)+FROM+db.points+PREWHERE+Date+>=toDate(now()-interval+7+day)+AND+Date+<=+toDate(now())+WHERE+(Path+in+metrics_list)+AND+(Time+>=+toUnixTimestamp(now()-interval+7+day)+AND+Time+<=+toUnixTimestamp(now()))+GROUP+BY+Path+FORMAT+RowBinary' --output /tmp/query_result &>/dev/null & );
> done;
> sleep 1
> if [[ $((${i}%25)) -eq 0 ]]; then
> sleep 30; ps -q $(pidof clickhouse-server) -o rss=;
> fi;
> done;\
> sleep 60;\
> ch_query;\
> ps -q $(pidof clickhouse-server) -o rss=;
Allocated by dict 0.00 B
primary_key_bytes_in_memory 8.69 GiB primary_key_bytes_in_memory_allocated 12.44 GiB
MarkCacheBytes 0.00 B
UncompressedCacheBytes 0.00 B
10891440
12313108
13180236
14087976
14972784
15837392
16788848
17643728
18468476
19435812
20124688
20968536
21812532
22665680
23506580
24395620
25245124
26062700
26964264
27797412
28368952
Allocated by dict 0.00 B
primary_key_bytes_in_memory 8.71 GiB primary_key_bytes_in_memory_allocated 12.47 GiB
MarkCacheBytes 179.05 MiB
UncompressedCacheBytes 0.00 B
28365908
``` | https://github.com/ClickHouse/ClickHouse/issues/32750 | https://github.com/ClickHouse/ClickHouse/pull/32982 | d304d3c7b421b046bf55f3e28b80a4a9f4b9fed4 | 8c9843caf2cf4cfbb46e9996e70d71f70d24eea8 | "2021-12-14T14:22:14Z" | c++ | "2021-12-23T04:58:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,744 | ["src/Interpreters/SelectQueryOptions.h", "src/Interpreters/TreeRewriter.cpp", "src/Storages/ProjectionsDescription.cpp", "tests/integration/test_storage_rabbitmq/test.py"] | Column X is not under aggregate function and not in GROUP BY | After upgrading to `21.11.6.7` from `21.6.6.51` some of our `INSERT`s start failing with the following error:
```
2021.12.14 09:54:05.538718 [ 373 ] {} <Error> void DB::StorageBuffer::backgroundFlush(): Code: 215. DB::Exception: Column `name` is not under aggregate function and not in GROUP BY: While processing name, toStartOfInterval(platform_time, toIntervalHour(1)) AS platform_time, anyLastSimpleState(publisher) AS publisher, if((sumSimpleStateIf(1, isFinite(value)) AS count) > 0, argMinStateIf(value, platform_time, isFinite(value)), argMinState(nan, fromUnixTimestamp64Nano(CAST(9223372036854775807, 'Int64')))) AS open, if(count > 0, argMaxStateIf(value, platform_time, isFinite(value)), argMaxState(nan, fromUnixTimestamp64Nano(CAST(0, 'Int64')))) AS close, coalesce(minSimpleStateOrNullIf(value, isFinite(value)), inf) AS min, coalesce(maxSimpleStateOrNullIf(value, isFinite(value)), -inf) AS max, sumSimpleStateIf(value, isFinite(value)) AS sum, sumSimpleStateIf(value * value, isFinite(value)) AS sum_of_squares, count, sumSimpleStateIf(1, NOT isFinite(value)) AS non_finite_count, argMaxState(NOT isFinite(value), platform_time) AS ends_with_non_finite. (NOT_AN_AGGREGATE), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x9b722d4 in /usr/bin/clickhouse
1. DB::ActionsMatcher::visit(DB::ASTIdentifier const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x120cd5f1 in /usr/bin/clickhouse
2. DB::ActionsMatcher::visit(DB::ASTExpressionList&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x120d5318 in /usr/bin/clickhouse
3. DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) @ 0x120a46b7 in /usr/bin/clickhouse
4. DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ActionsDAG>&, bool) @ 0x120a44cb in /usr/bin/clickhouse
5. DB::SelectQueryExpressionAnalyzer::appendSelect(DB::ExpressionActionsChain&, bool) @ 0x120aef09 in /usr/bin/clickhouse
6. DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::__1::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) @ 0x120b3b70 in /usr/bin/clickhouse
7. DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0x1233a78d in /usr/bin/clickhouse
8. ? @ 0x123332e4 in /usr/bin/clickhouse
9. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::unordered_map<DB::PreparedSetKey, std::__1::shared_ptr<DB::Set>, DB::PreparedSetKey::Hash, std::__1::equal_to<DB::PreparedSetKey>, std::__1::allocator<std::__1::pair<DB::PreparedSetKey const, std::__1::shared_ptr<DB::Set> > > >) @ 0x1232dd47 in /usr/bin/clickhouse
10. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x1232c6d4 in /usr/bin/clickhouse
11. DB::buildPushingToViewsChain(std::__1::shared_ptr<DB::IStorage> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>, std::__1::shared_ptr<DB::IAST> const&, bool, DB::ThreadStatus*, std::__1::atomic<unsigned long>*, DB::Block const&) @ 0x13362587 in /usr/bin/clickhouse
12. DB::InterpreterInsertQuery::buildChainImpl(std::__1::shared_ptr<DB::IStorage> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::Block const&, DB::ThreadStatus*, std::__1::atomic<unsigned long>*) @ 0x12312b7b in /usr/bin/clickhouse
13. DB::InterpreterInsertQuery::execute() @ 0x12314f89 in /usr/bin/clickhouse
14. DB::StorageBuffer::writeBlockToDestination(DB::Block const&, std::__1::shared_ptr<DB::IStorage>) @ 0x12be8091 in /usr/bin/clickhouse
15. DB::StorageBuffer::flushBuffer(DB::StorageBuffer::Buffer&, bool, bool, bool) @ 0x12be5667 in /usr/bin/clickhouse
16. DB::StorageBuffer::backgroundFlush() @ 0x12be8e85 in /usr/bin/clickhouse
17. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x11f0eb6e in /usr/bin/clickhouse
18. DB::BackgroundSchedulePool::threadFunction() @ 0x11f11547 in /usr/bin/clickhouse
19. ? @ 0x11f124b3 in /usr/bin/clickhouse
20. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x9bb4397 in /usr/bin/clickhouse
21. ? @ 0x9bb7d9d in /usr/bin/clickhouse
22. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
23. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 21.11.6.7 (official build))
```
Maybe related to https://github.com/ClickHouse/ClickHouse/pull/28502
## Tables
### indicator_log_buffer
The table that handles `INSERT`s.
```sql
CREATE TABLE indicator_log_buffer AS indicator_log
ENGINE = Buffer(
currentDatabase(), indicator_log,
/* num_layers */ 1,
/* min_time/max_time (sec) */ 1, 1,
/* min_rows/max_rows */ 1000000, 1000000,
/* min_bytes/max_bytes */ 100000000, 100000000
);
```
### indicator_log
The underlying table of `indicator_log_buffer`.
```sql
CREATE TABLE indicator_log (
name String,
platform_time DateTime64(9),
trace_id UInt64,
value Float64,
publisher LowCardinality(String),
INDEX trace_id_index trace_id TYPE minmax GRANULARITY 3
)
ENGINE = ReplacingMergeTree
PARTITION BY toYYYYMM(platform_time)
ORDER BY (name, platform_time);
```
### indicator_xxx_log
The aggregated table.
```sql
CREATE TABLE indicator_1sec_log (
name LowCardinality(String),
platform_time DateTime64(9),
publisher SimpleAggregateFunction(anyLast, LowCardinality(String)),
open AggregateFunction(argMin, Float64, DateTime64(9)),
close AggregateFunction(argMax, Float64, DateTime64(9)),
min SimpleAggregateFunction(min, Float64),
max SimpleAggregateFunction(max, Float64),
sum SimpleAggregateFunction(sum, Float64),
sum_of_squares SimpleAggregateFunction(sum, Float64),
count SimpleAggregateFunction(sum, UInt64),
non_finite_count SimpleAggregateFunction(sum, UInt64),
ends_with_non_finite AggregateFunction(argMax, UInt8, DateTime64(9))
)
ENGINE = AggregatingMergeTree
PARTITION BY toYYYYMM(platform_time)
ORDER BY (name, platform_time);
CREATE MATERIALIZED VIEW indicator_1sec_log_handler TO indicator_1sec_log
AS SELECT
name, -- << ERROR
toStartOfInterval(platform_time, INTERVAL 1 second) AS platform_time,
anyLastSimpleState(publisher) AS publisher,
count > 0 ? argMinStateIf(value, indicator_log.platform_time, isFinite(value))
: argMinState(nan, fromUnixTimestamp64Nano(CAST(9223372036854775807, 'Int64'))) AS open,
count > 0 ? argMaxStateIf(value, indicator_log.platform_time, isFinite(value))
: argMaxState(nan, fromUnixTimestamp64Nano(CAST(0, 'Int64'))) AS close,
coalesce(minSimpleStateOrNullIf(value, isFinite(value)), inf) AS min,
coalesce(maxSimpleStateOrNullIf(value, isFinite(value)), -inf) AS max,
sumSimpleStateIf(value, isFinite(value)) AS sum,
sumSimpleStateIf(value * value, isFinite(value)) AS sum_of_squares,
sumSimpleStateIf(1, isFinite(value)) AS count,
sumSimpleStateIf(1, NOT isFinite(value)) AS non_finite_count,
argMaxState(NOT isFinite(value), indicator_log.platform_time) AS ends_with_non_finite
FROM indicator_log
GROUP BY (name, platform_time);
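-- Illustrative sketch, not part of the original schema: the failing analysis happens in
-- buildPushingToViewsChain (frame 11 of the stack trace above), which is also used for direct
-- INSERTs, so the error can likely be reproduced without waiting for a Buffer flush by writing
-- to the underlying table directly. The values below are made up.
INSERT INTO indicator_log (name, platform_time, trace_id, value, publisher)
VALUES ('some_indicator', now64(9), 1, 42.0, 'some_publisher');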
``` | https://github.com/ClickHouse/ClickHouse/issues/32744 | https://github.com/ClickHouse/ClickHouse/pull/32751 | abbab7ff87d98e18cf2fcf1641d574edbd120058 | 2e62f086a1918b7f9b8f67bd8b0253052ef6ceff | "2021-12-14T10:13:41Z" | c++ | "2021-12-20T12:47:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,737 | ["src/Functions/fuzzBits.cpp", "tests/queries/0_stateless/02148_issue_32737.reference", "tests/queries/0_stateless/02148_issue_32737.sql"] | Crash in function `fuzzBits` | ```
SELECT fuzzBits(toFixedString('', 200), 0.5) FROM system.numbers FORMAT Null
``` | https://github.com/ClickHouse/ClickHouse/issues/32737 | https://github.com/ClickHouse/ClickHouse/pull/32755 | 4adf3b02855ac1ca9fbb3f0e5fe58f16522327f0 | de66f669b6a7af5fcef3b21cc047439c5cc4e782 | "2021-12-14T02:23:52Z" | c++ | "2021-12-15T00:58:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,668 | ["src/Interpreters/SelectQueryOptions.h", "src/Interpreters/TreeRewriter.cpp", "src/Storages/ProjectionsDescription.cpp", "tests/integration/test_storage_rabbitmq/test.py"] | ReplacingMergeTree Materialized view based on kafka topics becoming empty after migrating to 21.11 | **Describe what's wrong**
After moving to 21.11, the [ReplacingMergeTree-style materialized view](https://github.com/tiniumv/clickhouse_bug/blob/main/migrations/add_view.sql), which gets its data from [two other views](https://github.com/tiniumv/clickhouse_bug/blob/main/migrations/basic.sql) based on Kafka topics, stopped receiving any data.
The same request ```select * from causing_problems_mv;```
on 21.10
```
┌─field_a─┬─field_b─┬─field_c─┬──max_field_datetime─┐
│ 1 │ type_a │ 1 │ 2021-12-01 14:00:00 │
│ 2 │ type_a │ 2 │ 2021-12-01 13:00:00 │
└─────────┴─────────┴─────────┴─────────────────────┘
```
and on 21.11
```
0 rows in set. Elapsed: 0.002 sec.
```
Moreover, one of the source views (the one on the left side of the ```LEFT JOIN``` expression) broke as well. I isolated this: when I did not create the final view under 21.11, the basic views worked normally.
**Does it reproduce on recent release?**
Yes.
**How to reproduce**
Repository with everything needed to replicate bug: https://github.com/tiniumv/clickhouse_bug . There are two branches, main (with 21.10 CH) and 21.11
**Expected behavior**
I was expecting the results of the provided queries to be identical regardless of the ClickHouse version.
**Error message and/or stacktrace**
[I have collected all logs](https://raw.githubusercontent.com/tiniumv/clickhouse_bug/21.11/logs/clickhouse-server.err.log)
It says that ```Column `field_a` is not under aggregate function and not in GROUP BY: While processing field_a, field_b, argMax(field_c, field_datetime) AS field_c, max(field_datetime) AS max_field_datetime.```, but `field_a` clearly is in the GROUP BY.
**Additional context**
While the request ```select * from table_ii;``` returns nothing, ```select * from table_ii_queue;``` spits out all the lost messages.
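One way to confirm those failed view pushes from SQL rather than from the log file (an illustrative check, not from the original report; the message above corresponds to ClickHouse's `NOT_AN_AGGREGATE` error, and `system.errors` keeps cumulative counters since server start):

```sql
SELECT name, code, value
FROM system.errors
WHERE name = 'NOT_AN_AGGREGATE';
```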
| https://github.com/ClickHouse/ClickHouse/issues/32668 | https://github.com/ClickHouse/ClickHouse/pull/32751 | abbab7ff87d98e18cf2fcf1641d574edbd120058 | 2e62f086a1918b7f9b8f67bd8b0253052ef6ceff | "2021-12-13T10:44:02Z" | c++ | "2021-12-20T12:47:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,593 | ["src/Coordination/KeeperServer.cpp"] | Frightening log message from NuRaft | `2021.12.12 04:24:12.020896 [ 491 ] {} <Information> RaftInstance: parameters: timeout 0 - 0, heartbeat 0, leadership expiry 0, max batch 100, backoff 50, snapshot distance 10000, log sync stop gap 99999, reserved logs -1530494976, client timeout
10000, auto forwarding ON, API call type ASYNC, custom commit quorum size 0, custom election quorum size 0, snapshot receiver INCLUDED, leadership transfer wait time 0, grace period of lagging state machine 0, snapshot IO: BLOCKING`
Why `reserved logs -1530494976` (negative number)?
`timeout 0 - 0` - it is not obvious what this means.
`snapshot receiver INCLUDED` - it is screaming for what? | https://github.com/ClickHouse/ClickHouse/issues/32593 | https://github.com/ClickHouse/ClickHouse/pull/33224 | ba587c16a2ca37ab6d01baa8880aa6dfe86db8da | e0fe50444366edac344d6510458f51773a3991c5 | "2021-12-12T02:41:22Z" | c++ | "2021-12-28T08:17:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,586 | ["programs/benchmark/Benchmark.cpp"] | Pressing Ctrl+C twice should terminate `clickhouse-benchmark` immediately. | **Describe the issue**
```
Queries executed: 0.
Queries executed: 0.
^CStopping launch of queries. SIGINT received.
^C^C^C^C^C^C^C^C^C^C^C
``` | https://github.com/ClickHouse/ClickHouse/issues/32586 | https://github.com/ClickHouse/ClickHouse/pull/33303 | fa934d673df0116763d3006d373d7cd552b6b0ec | 2f121fd79a5d693d5f7e484694799c599512c6c2 | "2021-12-11T22:09:18Z" | c++ | "2021-12-30T08:00:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,512 | ["src/Interpreters/TreeOptimizer.cpp", "tests/queries/0_stateless/02147_order_by_optimizations.reference", "tests/queries/0_stateless/02147_order_by_optimizations.sql"] | `optimize_monotonous_functions_in_order_by` optimization breaks matching of primary key for `optimize_read_in_order` optimization in `Distributed` tables |
**How to reproduce**
```sql
CREATE TABLE t_local
(
`d` DateTime,
`v` UInt32
)
ENGINE = MergeTree
ORDER BY toStartOfHour(d);
CREATE TABLE t_dist AS t_local
ENGINE = Distributed(test_shard_localhost, currentDatabase(), t_local);
INSERT INTO t_local VALUES (now(), 1);
```
```sql
EXPLAIN PIPELINE
SELECT v
FROM t_dist
ORDER BY
toStartOfHour(d) ASC,
v ASC
SETTINGS optimize_monotonous_functions_in_order_by = 0
┌─explain────────────────────────────┐
│ (SettingQuotaAndLimits) │
│ (Expression) │
│ ExpressionTransform │
│ (Sorting) │
│ -> FinishSortingTransform <- │
│ PartialSortingTransform │
│ (Expression) │
│ ExpressionTransform │
│ (SettingQuotaAndLimits) │
│ (ReadFromMergeTree) │
│ MergeTreeInOrder 0 → 1 │
└────────────────────────────────────┘
EXPLAIN PIPELINE
SELECT v
FROM t_dist
ORDER BY
toStartOfHour(d) ASC,
v ASC
SETTINGS optimize_monotonous_functions_in_order_by = 1
┌─explain──────────────────────────────┐
│ (SettingQuotaAndLimits) │
│ (Expression) │
│ ExpressionTransform │
│ (Sorting) │
│ -> MergeSortingTransform <- │
│ LimitsCheckingTransform │
│ PartialSortingTransform │
│ (Expression) │
│ ExpressionTransform │
│ (SettingQuotaAndLimits) │
│ (ReadFromMergeTree) │
│ MergeTreeInOrder 0 → 1 │
└──────────────────────────────────────┘
EXPLAIN SYNTAX
SELECT v
FROM t_dist
ORDER BY
toStartOfHour(d) ASC,
v ASC
SETTINGS optimize_monotonous_functions_in_order_by = 1
Query id: 1add8abb-1f3c-4f50-82df-cadcf162358d
┌─explain────────────────────────────────────────────────┐
│ SELECT v │
│ FROM t_dist │
│ ORDER BY │
│ d ASC, │
│ v ASC │
│ SETTINGS optimize_monotonous_functions_in_order_by = 1 │
└────────────────────────────────────────────────────────┘
```
It happens, because `optimize_monotonous_functions_in_order_by` eliminates monotonous function from `ORDER BY`, while it is used for matching the prefix of primary key and `optimize_read_in_order` doesn't work. | https://github.com/ClickHouse/ClickHouse/issues/32512 | https://github.com/ClickHouse/ClickHouse/pull/32670 | 6879f03cb6f35613761eb272a2f90a2f96c3a900 | dadaeabda7a775d81590f8a41bdbf9a9e22c8075 | "2021-12-10T15:26:47Z" | c++ | "2021-12-13T21:42:48Z" |
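For reference, a per-query workaround sketch (it only restates the setting already shown in the first `EXPLAIN` above; whether it is acceptable depends on how much the rewrite helps elsewhere):

```sql
-- Keep toStartOfHour(d) in ORDER BY so it still matches the table's sorting key
-- and the read-in-order plan is chosen for the Distributed table.
SELECT v
FROM t_dist
ORDER BY toStartOfHour(d) ASC, v ASC
SETTINGS optimize_monotonous_functions_in_order_by = 0;
```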
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,504 | ["src/IO/ZlibDeflatingWriteBuffer.cpp", "src/QueryPipeline/BlockIO.h"] | Insert to S3 with multipart upload to GCS fails + crashes clickhouse-server with SIGABRT | **Description**
The server crashes when an INSERT into S3 storage uses gzip compression and an error is received from the S3 provider.
Google Cloud Storage has limitations in its S3 API support.
E.g. it returns an error when multipart uploads are attempted through the Amazon S3 API.
This is a Google limitation: it expects every request in a multipart upload, including the final one, to supply the same customer-supplied encryption key.
(https://cloud.google.com/storage/docs/migrating#methods-comparison)
So it's OK that ClickHouse generates an error when trying to insert a large file into Google Cloud Storage via S3.
However, it's not OK that once such an error is received, the ClickHouse server crashes with SIGABRT.
I also noticed that when sending without compression, only an error is generated and the request is rejected without a crash.
**Version**
21.11.5.33
**How to reproduce**
- Create a bucket in Google Cloud Storage with a service account and HMAC keys so the S3 API can be used.
- Send a file large enough that a multipart upload is triggered:
```
INSERT INTO FUNCTION s3('https://storage.googleapis.com/<bucket>/events.ndjson.gz',
'<key>',
'<secret>',
'JSONEachRow', 'ts_server DateTime, event_id String', 'gzip')
SELECT
now() ts_server,
generateUUIDv4() event_id
FROM system.numbers
LIMIT 10000000;
```
**Expected behavior**
Prevent the server from crashing on any error from the S3 API; just reject the INSERT request.
**Additional desired behavior**
Provide the ability to disable multipart upload, e.g. by allowing `s3_max_single_part_upload_size` to be set to some unbounded value such as 0 or -1, or via some other config option.
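Until such an option exists, a possible stop-gap sketch (the threshold value below is an assumption — it just needs to exceed the compressed object size so a single PUT is used and multipart is never triggered):

```sql
-- Sketch only: raise the single-part threshold above the expected object size.
-- 5000000000 (~5 GB, the usual S3 single-PUT ceiling) is an assumed value.
SET s3_max_single_part_upload_size = 5000000000;

INSERT INTO FUNCTION s3('https://storage.googleapis.com/<bucket>/events.ndjson.gz',
    '<key>', '<secret>',
    'JSONEachRow', 'ts_server DateTime, event_id String', 'gzip')
SELECT now() AS ts_server, generateUUIDv4() AS event_id
FROM system.numbers
LIMIT 10000000;
```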
**Error message and/or stacktrace**
Sample logs are attached: crash.log (when using gzip) and error.log (when sending without gzip).
[logs.zip](https://github.com/ClickHouse/ClickHouse/files/7692298/logs.zip)
| https://github.com/ClickHouse/ClickHouse/issues/32504 | https://github.com/ClickHouse/ClickHouse/pull/32649 | b4f3600e84ccd256709256012a69814e5b86665b | 730c16bd0c638b24a4ee3147fbd7245ac2cd224b | "2021-12-10T11:21:13Z" | c++ | "2021-12-13T18:05:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,487 | ["src/AggregateFunctions/AggregateFunctionTopK.cpp", "src/AggregateFunctions/AggregateFunctionTopK.h", "tests/queries/0_stateless/02149_issue_32487.reference", "tests/queries/0_stateless/02149_issue_32487.sql"] | topKWeightedState fails for some input types | **Describe what's wrong**
> A clear and concise description of what works not as it is supposed to.
Recent versions of topKWeightedState fail on some input types.
OK:
```sql
SELECT topKWeightedState(2)(1, 1)
Query id: d74c6d5b-f4d7-49a4-a0b4-93ef0916f5b4
┌─topKWeightedState(2)(1, 1)─┐
│ @ │
└────────────────────────────┘
1 rows in set. Elapsed: 0.003 sec.
```
NOT OK:
```sql
SELECT topKWeightedState(2)(now(), 1)
Query id: 5842828e-5dfc-47ea-b52d-ebfa498b1c92
Exception on client:
Code: 42. DB::Exception: Aggregate function topKWeighted requires two arguments: while receiving packet from 0.0.0.0:9000. (NUMBER_OF_ARGUMENTS_DOESNT_MATCH)
Connecting to 0.0.0.0:9000 as user default.
Connected to ClickHouse server version 21.11.5 revision 54450.
```
When looking for the types:
```sql
DESCRIBE TABLE
(
SELECT
topKWeightedState(2)(1, 1),
topKWeightedState(2)(now(), 1)
)
FORMAT Vertical
Query id: 62814122-3c1c-470a-b461-6e3b90dabb81
Row 1:
──────
name: topKWeightedState(2)(1, 1)
type: AggregateFunction(topKWeighted(2), UInt8, UInt8)
default_type:
default_expression:
comment:
codec_expression:
ttl_expression:
Row 2:
──────
name: topKWeightedState(2)(now(), 1)
type: AggregateFunction(topKWeighted(2), DateTime)
default_type:
default_expression:
comment:
codec_expression:
ttl_expression:
2 rows in set. Elapsed: 0.003 sec.
```
The `topKWeightedState(2)(now(), 1)` result type is missing the weight type in the aggregate function.
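A possible workaround sketch, assuming the plain-integer code path (which works above) is also taken after a cast — this is an assumption, not something verified here:

```sql
-- Sketch: feed the DateTime as UInt32 seconds so both arguments are plain integers,
-- matching the case that is known to work; convert back when reading the merged result.
SELECT topKWeightedState(2)(toUInt32(now()), 1);
```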
**Does it reproduce on recent release?**
Yes. It happens in `21.11.5.33`. It also happens in `21.9.6.24`, `21.8.11.4`, `21.7.11.3`, and `21.5.9.4`. It works for version `21.3.8.76` and for versions older than that.
**How to reproduce**
With version `21.11.5.33`, run the query:
```
SELECT topKWeightedState(2)(now(), 1)
Query id: dc844e76-250d-42f4-ac67-1ff3c61f57c0
Exception on client:
Code: 42. DB::Exception: Aggregate function topKWeighted requires two arguments: while receiving packet from 0.0.0.0:9000. (NUMBER_OF_ARGUMENTS_DOESNT_MATCH)
```
**Expected behavior**
I would expect the behavior found in older releases, or the one you get for some other types:
```sql
SELECT
topKWeightedState(2)(now(), 1),
topKWeightedState(2)(1, 1)
┌─topKWeightedState(2)(now(), 1)─┬─topKWeightedState(2)(1, 1)─┐
│ �k�a@ │ @ │
└────────────────────────────────┴────────────────────────────┘
1 rows in set. Elapsed: 0.016 sec.
```
| https://github.com/ClickHouse/ClickHouse/issues/32487 | https://github.com/ClickHouse/ClickHouse/pull/32914 | 5cd56b127f2dad91e4367607298e797d30683c32 | 778cd76987a8222217f7bf9f0c8fcb692b82b673 | "2021-12-09T20:50:51Z" | c++ | "2021-12-18T07:18:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,480 | ["src/Common/SparseHashMap.h", "src/Dictionaries/HashedArrayDictionary.h", "src/Dictionaries/HashedDictionary.h"] | Hierarchical dictionaries with sparse_hashed layout can't be loaded | **Describe what's wrong**
Hierarchical dictionaries with the sparse_hashed layout can't be loaded (loading takes too long).
Works fine in 21.3. Does not work in 21.11.
**Does it reproduce on recent release?**
Yes. 21.11.5.33
**How to reproduce**
Dictionary creation
```
drop table if exists default.hierdictsrc;
create table default.hierdictsrc (parent UInt64, child UInt64) Engine=MergeTree ORDER BY tuple();
insert into default.hierdictsrc
with 80 as breadth
select parent, arrayJoin(child)+parent*breadth as child from (
select arrayJoin(range(0,breadth)) as parent, range(0,breadth) as child
)
UNION ALL
with 80 as breadth
select parent, arrayJoin(child)+parent*breadth as child from (
select arrayJoin(range(breadth,breadth*breadth)) as parent, range(0,breadth) as child
)
UNION ALL
with 80 as breadth
select parent, arrayJoin(child)+parent*breadth as child from (
select arrayJoin(range(breadth*breadth,breadth*breadth*breadth)) as parent, range(0,breadth) as child
)
;
select count(*) from default.hierdictsrc ;
drop dictionary if exists default.hierdict;
CREATE DICTIONARY default.hierdict
(
`child` UInt64,
`parent` UInt64 HIERARCHICAL
)
PRIMARY KEY child
SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' DB 'default' TABLE 'hierdictsrc'))
LIFETIME(MIN 300 MAX 86400)
LAYOUT(SPARSE_HASHED());
system reload dictionary default.hierdict;
```
v21.3 loads it correctly:
```
server21.3 :) system reload dictionary default.hierdict;
SYSTEM RELOAD DICTIONARY default.hierdict
Query id: 504c1c83-ebab-4445-9654-6b5656104067
[server21.3] 2021.12.09 20:35:06.688338 [ 15173 ] {504c1c83-ebab-4445-9654-6b5656104067} <Debug> executeQuery: (from [::ffff:10.10.10.134]:50834, using production parser) system reload dictionary default.hierdict;
[server21.3] 2021.12.09 20:35:06.688964 [ 15173 ] {504c1c83-ebab-4445-9654-6b5656104067} <Trace> ContextAccess (default): Access granted: SYSTEM RELOAD DICTIONARY ON *.*
[server21.3] 2021.12.09 20:35:06.689516 [ 15173 ] {504c1c83-ebab-4445-9654-6b5656104067} <Trace> ExternalDictionariesLoader: Will load the object 'default.hierdict' in background, force = true, loading_id = 21682
[server21.3] 2021.12.09 20:35:49.375847 [ 15173 ] {504c1c83-ebab-4445-9654-6b5656104067} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
Ok.
```
v21.11 loads indefinitely and never finishes. The table gets locked, and the only way to drop the table is to restart ClickHouse (Ordinary database engine).
```
server21.11 :) system reload dictionary default.hierdict;
SYSTEM RELOAD DICTIONARY default.hierdict
Query id: c766d74a-e2ce-4694-a31c-cdecfd40765d
[server21.11] 2021.12.09 20:31:57.707631 [ 11723 ] {c766d74a-e2ce-4694-a31c-cdecfd40765d} <Debug> executeQuery: (from [::1]:59362) system reload dictionary default.hierdict;
Timeout exceeded while receiving data from server. Waited for 300 seconds, timeout is 300 seconds.
Cancelling query.
```
Last related entries in the log:
```
2021.12.09 20:31:57.707631 [ 11723 ] {c766d74a-e2ce-4694-a31c-cdecfd40765d} <Debug> executeQuery: (from [::1]:59362) system reload dictionary default.hierdict;
2021.12.09 20:31:57.708693 [ 11747 ] {c766d74a-e2ce-4694-a31c-cdecfd40765d} <Debug> LOCAL-Session: 32f9972c-c11e-4648-b2f9-972cc11e1648 Authenticating user 'default' from 127.0.0.1:0
2021.12.09 20:31:57.708727 [ 11747 ] {c766d74a-e2ce-4694-a31c-cdecfd40765d} <Debug> LOCAL-Session: 32f9972c-c11e-4648-b2f9-972cc11e1648 Authenticated with global context as user 94309d50-4f52-525
0-31bd-74fecac179db
2021.12.09 20:31:57.708754 [ 11747 ] {c766d74a-e2ce-4694-a31c-cdecfd40765d} <Debug> LOCAL-Session: 32f9972c-c11e-4648-b2f9-972cc11e1648 Creating query context from global context, user_id: 94309d
50-4f52-5250-31bd-74fecac179db, parent context user: <NOT SET>
2021.12.09 20:31:57.709145 [ 11747 ] {c766d74a-e2ce-4694-a31c-cdecfd40765d} <Debug> executeQuery: (internal) SELECT `child`, `parent` FROM `default`.`hierdictsrc`;
2021.12.09 20:31:57.709549 [ 11747 ] {c766d74a-e2ce-4694-a31c-cdecfd40765d} <Debug> default.hierdictsrc (SelectExecutor): Key condition: unknown
2021.12.09 20:31:57.709857 [ 11747 ] {c766d74a-e2ce-4694-a31c-cdecfd40765d} <Debug> default.hierdictsrc (SelectExecutor): Selected 2/2 parts by partition key, 2 parts by primary key, 5001/5001 ma
rks by primary key, 5001 marks to read from 2 ranges
2021.12.09 20:31:57.709986 [ 11747 ] {c766d74a-e2ce-4694-a31c-cdecfd40765d} <Debug> default.hierdictsrc (SelectExecutor): Reading approx. 40960000 rows with 8 streams
....
2021.12.09 20:46:17.689121 [ 11723 ] {c766d74a-e2ce-4694-a31c-cdecfd40765d} <Debug> MemoryTracker: Peak memory usage (for query): 8.51 MiB.
2021.12.09 20:46:17.689923 [ 11723 ] {c766d74a-e2ce-4694-a31c-cdecfd40765d} <Error> executeQuery: Code: 210. DB::NetException: I/O error: Broken pipe, while writing to socket ([::1]:59362). (NETWORK_ERROR) (version 21.11.5.33 (official build)) (from [::1]:59362) (in query: system reload dictionary default.hierdict;), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x9b737d4 in /usr/bin/clickhouse
1. DB::WriteBufferFromPocoSocket::nextImpl() @ 0x11f84a5c in /usr/bin/clickhouse
2. DB::TCPHandler::sendEndOfStream() @ 0x13128029 in /usr/bin/clickhouse
3. DB::TCPHandler::runImpl() @ 0x1311f861 in /usr/bin/clickhouse
4. DB::TCPHandler::run() @ 0x13132fd9 in /usr/bin/clickhouse
5. Poco::Net::TCPServerConnection::start() @ 0x15d6e96f in /usr/bin/clickhouse
6. Poco::Net::TCPServerDispatcher::run() @ 0x15d70d61 in /usr/bin/clickhouse
7. Poco::PooledThread::run() @ 0x15e85709 in /usr/bin/clickhouse
8. Poco::ThreadImpl::runnableEntry(void*) @ 0x15e82e40 in /usr/bin/clickhouse
9. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
10. __clone @ 0xfe8dd in /usr/lib64/libc-2.17.so
```
When the dictionary uses the HASHED layout, it works fine in 21.11 too.
```
drop table if exists default.hierdictsrc;
create table default.hierdictsrc (parent UInt64, child UInt64, lvl UInt8) Engine=MergeTree ORDER BY tuple();
insert into default.hierdictsrc
with 80 as breadth
select parent, arrayJoin(child)+parent*breadth as child, 1 as lvl from (
select arrayJoin(range(0,breadth)) as parent, range(0,breadth) as child
)
UNION ALL
with 80 as breadth
select parent, arrayJoin(child)+parent*breadth as child, 2 as lvl from (
select arrayJoin(range(breadth,breadth*breadth)) as parent, range(0,breadth) as child
)
UNION ALL
with 80 as breadth
select parent, arrayJoin(child)+parent*breadth as child, 3 as lvl from (
select arrayJoin(range(breadth*breadth,breadth*breadth*breadth)) as parent, range(0,breadth) as child
)
;
select count(*) from default.hierdictsrc ;
drop dictionary if exists default.hierdict;
CREATE DICTIONARY default.hierdict
(
`child` UInt64,
`parent` UInt64 HIERARCHICAL
)
PRIMARY KEY child
SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' DB 'default' TABLE 'hierdictsrc'))
LIFETIME(MIN 300 MAX 86400)
LAYOUT(HASHED());
system reload dictionary default.hierdict;
```
Result:
```
server21.11 :) system reload dictionary default.hierdict;
SYSTEM RELOAD DICTIONARY default.hierdict
Query id: a8eab9d5-f75f-453d-86d9-41374c677fbb
[server21.11] 2021.12.09 20:56:49.091200 [ 11723 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Debug> executeQuery: (from [::1]:59370) system reload dictionary default.hierdict;
[server21.11] 2021.12.09 20:56:49.091329 [ 11723 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> ContextAccess (default): Access granted: SYSTEM RELOAD DICTIONARY ON *.*
[server21.11] 2021.12.09 20:56:49.091524 [ 11723 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> ExternalDictionariesLoader: Will load the object 'default.hierdict' in background, force = true, loading_id = 7
[server21.11] 2021.12.09 20:56:49.091640 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> ExternalDictionariesLoader: Start loading object 'default.hierdict'
[server21.11] 2021.12.09 20:56:49.091943 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Debug> LOCAL-Session: 37124aa6-fc39-4b1f-b712-4aa6fc39eb1f Authenticating user 'default' from 127.0.0.1:0
[server21.11] 2021.12.09 20:56:49.091996 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Debug> LOCAL-Session: 37124aa6-fc39-4b1f-b712-4aa6fc39eb1f Authenticated with global context as user 94309d50-4f52-5250-31bd-74fecac179db
[server21.11] 2021.12.09 20:56:49.092026 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Debug> LOCAL-Session: 37124aa6-fc39-4b1f-b712-4aa6fc39eb1f Creating query context from global context, user_id: 94309d50-4f52-5250-31bd-74fecac179db, parent context user: <NOT SET>
[server21.11] 2021.12.09 20:56:49.092074 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=true
[server21.11] 2021.12.09 20:56:49.092091 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> ContextAccess (default): List of all grants: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, TRUNCATE, OPTIMIZE, KILL QUERY, MOVE PARTITION BETWEEN SHARDS, SYSTEM, dictGet, INTROSPECTION, SOURCES ON *.*
[server21.11] 2021.12.09 20:56:49.092104 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> ContextAccess (default): List of all grants including implicit: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, TRUNCATE, OPTIMIZE, KILL QUERY, MOVE PARTITION BETWEEN SHARDS, SYSTEM, dictGet, INTROSPECTION, SOURCES ON *.*
[server21.11] 2021.12.09 20:56:49.092140 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=true
[server21.11] 2021.12.09 20:56:49.092160 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> ContextAccess (default): List of all grants: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, TRUNCATE, OPTIMIZE, KILL QUERY, MOVE PARTITION BETWEEN SHARDS, SYSTEM, dictGet, INTROSPECTION, SOURCES ON *.*
[server21.11] 2021.12.09 20:56:49.092190 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> ContextAccess (default): List of all grants including implicit: GRANT SHOW, SELECT, INSERT, ALTER, CREATE, DROP, TRUNCATE, OPTIMIZE, KILL QUERY, MOVE PARTITION BETWEEN SHARDS, SYSTEM, dictGet, INTROSPECTION, SOURCES ON *.*
[server21.11] 2021.12.09 20:56:49.092255 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> DictionaryFactory: Created dictionary source 'ClickHouse: default.hierdictsrc' for dictionary 'default.hierdict'
[server21.11] 2021.12.09 20:56:49.092408 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Debug> executeQuery: (internal) SELECT `child`, `parent` FROM `default`.`hierdictsrc`;
[server21.11] 2021.12.09 20:56:49.092711 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> ContextAccess (default): Access granted: SELECT(parent, child) ON default.hierdictsrc
[server21.11] 2021.12.09 20:56:49.092778 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
[server21.11] 2021.12.09 20:56:49.092891 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Debug> default.hierdictsrc (SelectExecutor): Key condition: unknown
[server21.11] 2021.12.09 20:56:49.093237 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Debug> default.hierdictsrc (SelectExecutor): Selected 2/2 parts by partition key, 2 parts by primary key, 5001/5001 marks by primary key, 5001 marks to read from 2 ranges
[server21.11] 2021.12.09 20:56:49.093373 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Debug> default.hierdictsrc (SelectExecutor): Reading approx. 40960000 rows with 8 streams
[server21.11] 2021.12.09 20:56:51.649314 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Debug> MemoryTracker: Current memory usage (for query): 1.01 GiB.
[server21.11] 2021.12.09 20:56:54.239137 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Debug> MemoryTracker: Current memory usage (for query): 2.01 GiB.
[server21.11] 2021.12.09 20:56:55.988792 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> ExternalDictionariesLoader: Supposed update time for 'default.hierdict' is 2021-12-10 10:26:52 (loaded, lifetime [300, 86400], no errors)
[server21.11] 2021.12.09 20:56:55.988807 [ 11831 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Trace> ExternalDictionariesLoader: Next update time for 'default.hierdict' was set to 2021-12-10 10:26:52
[server21.11] 2021.12.09 20:56:55.988982 [ 11723 ] {a8eab9d5-f75f-453d-86d9-41374c677fbb} <Debug> MemoryTracker: Peak memory usage (for query): 2.01 GiB.
Ok.
0 rows in set. Elapsed: 6.899 sec.
```
**Expected behavior**
Sparse hashed layout dictionary shall work fine as they used to work in 21.3.
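For anyone checking whether the load ever completes, the standard `system.dictionaries` table can be polled (a small sketch; the columns used are the usual ones and are assumed to be present in this version):

```sql
-- Poll loading status, element count and duration while the reload is running.
SELECT name, status, element_count, loading_duration, last_exception
FROM system.dictionaries
WHERE name = 'hierdict';
```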
| https://github.com/ClickHouse/ClickHouse/issues/32480 | https://github.com/ClickHouse/ClickHouse/pull/32536 | bda0cc2f762a89f570d1b05e3847a4c051acffa2 | 233505b665f4f266405289246fd99a3775188742 | "2021-12-09T19:00:07Z" | c++ | "2021-12-14T14:44:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,474 | ["tests/queries/0_stateless/02804_intersect_bad_cast.reference", "tests/queries/0_stateless/02804_intersect_bad_cast.sql"] | AggregationCommon.h:97:35: runtime error: downcast of address which does not point to an object of type ColumnVectorHelper | https://s3.amazonaws.com/clickhouse-test-reports/0/5b06e30ea2cb7d5abdfec2ae1bbaba0603aed09c/fuzzer_astfuzzerubsan,actions//report.html
```
2021.12.09 19:08:59.387460 [ 154 ] {fc5c622d-bb4a-4fb4-8e80-bc9dbc9a20c6} <Debug> executeQuery: (from [::ffff:127.0.0.1]:60978) SELECT 2., * FROM (SELECT 1024, 256 INTERSECT SELECT 100 AND inf, 256)
2021.12.09 19:08:59.387670 [ 154 ] {fc5c622d-bb4a-4fb4-8e80-bc9dbc9a20c6} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON system.one
2021.12.09 19:08:59.387839 [ 154 ] {fc5c622d-bb4a-4fb4-8e80-bc9dbc9a20c6} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON system.one
2021.12.09 19:08:59.388012 [ 154 ] {fc5c622d-bb4a-4fb4-8e80-bc9dbc9a20c6} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON system.one
2021.12.09 19:08:59.388167 [ 154 ] {fc5c622d-bb4a-4fb4-8e80-bc9dbc9a20c6} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON system.one
2021.12.09 19:08:59.388334 [ 154 ] {fc5c622d-bb4a-4fb4-8e80-bc9dbc9a20c6} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2021.12.09 19:08:59.388407 [ 154 ] {fc5c622d-bb4a-4fb4-8e80-bc9dbc9a20c6} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2021.12.09 19:08:59.388494 [ 154 ] {fc5c622d-bb4a-4fb4-8e80-bc9dbc9a20c6} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
../src/Interpreters/AggregationCommon.h:97:35: runtime error: downcast of address 0x7f848038f4a0 which does not point to an object of type 'const DB::ColumnVectorHelper'
0x7f848038f4a0: note: object is of type 'DB::ColumnConst'
00 00 00 00 10 e0 58 09 00 00 00 00 01 00 00 00 00 00 00 00 80 0b a5 f2 83 7f 00 00 01 00 00 00
^~~~~~~~~~~~~~~~~~~~~~~
vptr for 'DB::ColumnConst'
2021.12.09 19:09:00.386462 [ 179 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushing system log, 1442 entries to flush up to offset 307464
2021.12.09 19:09:00.388084 [ 179 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 88.39 GiB.
2021.12.09 19:09:00.388855 [ 179 ] {} <Trace> system.asynchronous_metric_log (746a6d9d-b6b5-4d4e-b46a-6d9db6b59d4e): Renaming temporary part tmp_insert_202112_213_213_0 to 202112_213_213_0.
2021.12.09 19:09:00.389025 [ 282 ] {} <Debug> system.asynchronous_metric_log (746a6d9d-b6b5-4d4e-b46a-6d9db6b59d4e) (MergerMutator): Selected 2 parts from 202112_1_208_77 to 202112_209_209_0
2021.12.09 19:09:00.389061 [ 179 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushed system log up to offset 307464
2021.12.09 19:09:00.389084 [ 282 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 88.39 GiB.
2021.12.09 19:09:00.389229 [ 114 ] {746a6d9d-b6b5-4d4e-b46a-6d9db6b59d4e::202112_1_209_78} <Debug> MergeTask::PrepareStage: Merging 2 parts: from 202112_1_208_77 to 202112_209_209_0 into Compact
2021.12.09 19:09:00.389341 [ 114 ] {746a6d9d-b6b5-4d4e-b46a-6d9db6b59d4e::202112_1_209_78} <Debug> MergeTask::PrepareStage: Selected MergeAlgorithm: Horizontal
2021.12.09 19:09:00.389440 [ 114 ] {746a6d9d-b6b5-4d4e-b46a-6d9db6b59d4e::202112_1_209_78} <Debug> MergeTreeSequentialSource: Reading 36 marks from part 202112_1_208_77, total 300254 rows starting from the beginning of the part
2021.12.09 19:09:00.389642 [ 114 ] {746a6d9d-b6b5-4d4e-b46a-6d9db6b59d4e::202112_1_209_78} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202112_209_209_0, total 1442 rows starting from the beginning of the part
2021.12.09 19:09:00.418689 [ 115 ] {746a6d9d-b6b5-4d4e-b46a-6d9db6b59d4e::202112_1_209_78} <Debug> MergeTask::MergeProjectionsStage: Merge sorted 301696 rows, containing 5 columns (5 merged, 0 gathered) in 0.029498074 sec., 10227650.795099368 rows/sec., 230.88 MiB/sec.
2021.12.09 19:09:00.420247 [ 115 ] {746a6d9d-b6b5-4d4e-b46a-6d9db6b59d4e::202112_1_209_78} <Trace> system.asynchronous_metric_log (746a6d9d-b6b5-4d4e-b46a-6d9db6b59d4e): Renaming temporary part tmp_merge_202112_1_209_78 to 202112_1_209_78.
2021.12.09 19:09:00.420355 [ 115 ] {746a6d9d-b6b5-4d4e-b46a-6d9db6b59d4e::202112_1_209_78} <Trace> system.asynchronous_metric_log (746a6d9d-b6b5-4d4e-b46a-6d9db6b59d4e) (MergerMutator): Merged 2 parts: from 202112_1_208_77 to 202112_209_209_0
2021.12.09 19:09:00.420443 [ 115 ] {} <Debug> MemoryTracker: Peak memory usage Mutate/Merge: 4.00 MiB.
#0 0x1ad98ebf in void DB::fillFixedBatch<unsigned short, wide::integer<128ul, unsigned int> >(unsigned long, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > const&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, DB::PODArray<wide::integer<128ul, unsigned int>, 4096ul, Allocator<false, false>, 15ul, 16ul>&, unsigned long&) (/workspace/clickhouse+0x1ad98ebf)
#1 0x1ad981d1 in void DB::packFixedBatch<wide::integer<128ul, unsigned int> >(unsigned long, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > const&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, DB::PODArray<wide::integer<128ul, unsigned int>, 4096ul, Allocator<false, false>, 15ul, 16ul>&) (/workspace/clickhouse+0x1ad981d1)
#2 0x1e4828e0 in DB::ColumnsHashing::HashMethodKeysFixed<wide::integer<128ul, unsigned int>, wide::integer<128ul, unsigned int>, void, false, false, true, false>::HashMethodKeysFixed(std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > const&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, std::__1::shared_ptr<DB::ColumnsHashing::HashMethodContext> const&) obj-x86_64-linux-gnu/../src/Common/ColumnsHashing.h:524:13
#3 0x1f846161 in void DB::IntersectOrExceptTransform::addToSet<DB::SetMethodKeysFixed<HashSetTable<wide::integer<128ul, unsigned int>, HashTableCell<wide::integer<128ul, unsigned int>, UInt128HashCRC32, HashTableNoState>, UInt128HashCRC32, HashTableGrower<8ul>, Allocator<true, true> >, false> >(DB::SetMethodKeysFixed<HashSetTable<wide::integer<128ul, unsigned int>, HashTableCell<wide::integer<128ul, unsigned int>, UInt128HashCRC32, HashTableNoState>, UInt128HashCRC32, HashTableGrower<8ul>, Allocator<true, true> >, false>&, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > const&, unsigned long, DB::SetVariantsTemplate<DB::NonClearableSet>&) const obj-x86_64-linux-gnu/../src/Processors/Transforms/IntersectOrExceptTransform.cpp:97:28
#4 0x1f842ca9 in DB::IntersectOrExceptTransform::accumulate(DB::Chunk) obj-x86_64-linux-gnu/../src/Processors/Transforms/IntersectOrExceptTransform.cpp:148:13
#5 0x1f8423ed in DB::IntersectOrExceptTransform::work() obj-x86_64-linux-gnu/../src/Processors/Transforms/IntersectOrExceptTransform.cpp:82:9
#6 0x1f404558 in DB::executeJob(DB::IProcessor*) obj-x86_64-linux-gnu/../src/Processors/Executors/ExecutionThreadContext.cpp:45:20
#7 0x1f4043e7 in DB::ExecutionThreadContext::executeTask() obj-x86_64-linux-gnu/../src/Processors/Executors/ExecutionThreadContext.cpp:63:9
#8 0x1f3f66c0 in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:213:26
#9 0x1f3f7947 in DB::PipelineExecutor::executeSingleThread(unsigned long) obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:178:5
#10 0x1f3f7947 in DB::PipelineExecutor::executeImpl(unsigned long)::$_1::operator()() const obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:311:21
#11 0x1f3f7947 in decltype(std::__1::forward<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&>(fp)()) std::__1::__invoke_constexpr<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682:1
#12 0x1f3f7822 in decltype(auto) std::__1::__apply_tuple_impl<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415:1
#13 0x1f3f7822 in decltype(auto) std::__1::apply<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424:1
#14 0x1f3f7822 in ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()::operator()() obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:188:13
#15 0x1f3f7822 in decltype(std::__1::forward<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1
#16 0xde8bd19 in std::__1::__function::__policy_func<void ()>::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221:16
#17 0xde8bd19 in std::__1::function<void ()>::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560:12
#18 0xde8bd19 in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:274:17
#19 0xde8e721 in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()::operator()() const obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:139:73
#20 0xde8e721 in decltype(std::__1::forward<void>(fp)()) std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1
#21 0xde8e721 in void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:280:5
#22 0xde8e721 in void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:291:5
#23 0x7f85ec463608 in start_thread /build/glibc-eX1tMB/glibc-2.31/nptl/pthread_create.c:477:8
#24 0x7f85ec38a292 in __clone /build/glibc-eX1tMB/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior ../src/Interpreters/AggregationCommon.h:97:35 in
2021.12.09 19:09:00.694332 [ 107 ] {} <Trace> BaseDaemon: Received signal -3
2021.12.09 19:09:00.694461 [ 352 ] {} <Fatal> BaseDaemon: ########################################
2021.12.09 19:09:00.694534 [ 352 ] {} <Fatal> BaseDaemon: (version 21.13.1.44, build id: E2243E56E70863BB) (from thread 328) (query_id: fc5c622d-bb4a-4fb4-8e80-bc9dbc9a20c6) Received signal Unknown signal (-3)
2021.12.09 19:09:00.694559 [ 352 ] {} <Fatal> BaseDaemon: Sanitizer trap.
2021.12.09 19:09:00.694614 [ 352 ] {} <Fatal> BaseDaemon: Stack trace: 0xde5ceaa 0x1d37d991 0xde1d8d6 0xde2f619 0x1ad98ec0 0x1ad981d2 0x1e4828e1 0x1f846162 0x1f842caa 0x1f8423ee 0x1f404559 0x1f4043e8 0x1f3f66c1 0x1f3f7948 0x1f3f7823 0xde8bd1a 0xde8e722 0x7f85ec463609 0x7f85ec38a293
2021.12.09 19:09:00.702629 [ 352 ] {} <Fatal> BaseDaemon: 0.1. inlined from ./obj-x86_64-linux-gnu/../src/Common/StackTrace.cpp:305: StackTrace::tryCapture()
2021.12.09 19:09:00.702653 [ 352 ] {} <Fatal> BaseDaemon: 0. ../src/Common/StackTrace.cpp:266: StackTrace::StackTrace() @ 0xde5ceaa in /workspace/clickhouse
2021.12.09 19:09:00.720028 [ 352 ] {} <Fatal> BaseDaemon: 1. ./obj-x86_64-linux-gnu/../base/daemon/BaseDaemon.cpp:391: sanitizerDeathCallback() @ 0x1d37d991 in /workspace/clickhouse
2021.12.09 19:09:01.000105 [ 314 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 3.52 GiB, peak 9.57 GiB, will set to 3.58 GiB (RSS), difference: 71.23 MiB
2021.12.09 19:09:01.500850 [ 352 ] {} <Fatal> BaseDaemon: 2. __sanitizer::Die() @ 0xde1d8d6 in /workspace/clickhouse
2021.12.09 19:09:01.761253 [ 175 ] {} <Trace> SystemLog (system.trace_log): Flushing system log, 56 entries to flush up to offset 16247
2021.12.09 19:09:01.762261 [ 175 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 88.39 GiB.
2021.12.09 19:09:01.762942 [ 175 ] {} <Trace> system.trace_log (41464490-b3a3-4132-8146-4490b3a35132): Renaming temporary part tmp_insert_202112_198_198_0 to 202112_198_198_0.
2021.12.09 19:09:01.763152 [ 175 ] {} <Trace> SystemLog (system.trace_log): Flushed system log up to offset 16247
2021.12.09 19:09:02.000100 [ 314 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 3.58 GiB, peak 9.57 GiB, will set to 3.84 GiB (RSS), difference: 264.16 MiB
2021.12.09 19:09:02.267214 [ 352 ] {} <Fatal> BaseDaemon: 3. ? @ 0xde2f619 in /workspace/clickhouse
2021.12.09 19:09:03.034073 [ 352 ] {} <Fatal> BaseDaemon: 4. void DB::fillFixedBatch<unsigned short, wide::integer<128ul, unsigned int> >(unsigned long, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > const&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, DB::PODArray<wide::integer<128ul, unsigned int>, 4096ul, Allocator<false, false>, 15ul, 16ul>&, unsigned long&) @ 0x1ad98ec0 in /workspace/clickhouse
2021.12.09 19:09:03.170738 [ 173 ] {} <Trace> SystemLog (system.query_thread_log): Flushing system log, 866 entries to flush up to offset 142134
2021.12.09 19:09:03.178071 [ 173 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 88.39 GiB.
2021.12.09 19:09:03.180819 [ 173 ] {} <Trace> system.query_thread_log (dd67b783-de12-49b6-9d67-b783de12e9b6): Renaming temporary part tmp_insert_202112_163_163_0 to 202112_163_163_0.
2021.12.09 19:09:03.181261 [ 173 ] {} <Trace> SystemLog (system.query_thread_log): Flushed system log up to offset 142134
2021.12.09 19:09:03.729417 [ 143 ] {} <Trace> system.trace_log (41464490-b3a3-4132-8146-4490b3a35132): Found 2 old parts to remove.
2021.12.09 19:09:03.729457 [ 143 ] {} <Debug> system.trace_log (41464490-b3a3-4132-8146-4490b3a35132): Removing part from filesystem 202112_1_130_32
2021.12.09 19:09:03.729736 [ 143 ] {} <Debug> system.trace_log (41464490-b3a3-4132-8146-4490b3a35132): Removing part from filesystem 202112_131_131_0
2021.12.09 19:09:03.801282 [ 352 ] {} <Fatal> BaseDaemon: 5. void DB::packFixedBatch<wide::integer<128ul, unsigned int> >(unsigned long, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > const&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, DB::PODArray<wide::integer<128ul, unsigned int>, 4096ul, Allocator<false, false>, 15ul, 16ul>&) @ 0x1ad981d2 in /workspace/clickhouse
2021.12.09 19:09:03.842168 [ 352 ] {} <Fatal> BaseDaemon: 6. ./obj-x86_64-linux-gnu/../src/Common/ColumnsHashing.h:524: DB::ColumnsHashing::HashMethodKeysFixed<wide::integer<128ul, unsigned int>, wide::integer<128ul, unsigned int>, void, false, false, true, false>::HashMethodKeysFixed(std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > const&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, std::__1::shared_ptr<DB::ColumnsHashing::HashMethodContext> const&) @ 0x1e4828e1 in /workspace/clickhouse
2021.12.09 19:09:03.862989 [ 352 ] {} <Fatal> BaseDaemon: 7. ./obj-x86_64-linux-gnu/../src/Processors/Transforms/IntersectOrExceptTransform.cpp:0: void DB::IntersectOrExceptTransform::addToSet<DB::SetMethodKeysFixed<HashSetTable<wide::integer<128ul, unsigned int>, HashTableCell<wide::integer<128ul, unsigned int>, UInt128HashCRC32, HashTableNoState>, UInt128HashCRC32, HashTableGrower<8ul>, Allocator<true, true> >, false> >(DB::SetMethodKeysFixed<HashSetTable<wide::integer<128ul, unsigned int>, HashTableCell<wide::integer<128ul, unsigned int>, UInt128HashCRC32, HashTableNoState>, UInt128HashCRC32, HashTableGrower<8ul>, Allocator<true, true> >, false>&, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > const&, unsigned long, DB::SetVariantsTemplate<DB::NonClearableSet>&) const @ 0x1f846162 in /workspace/clickhouse
2021.12.09 19:09:03.882587 [ 352 ] {} <Fatal> BaseDaemon: 8. ./obj-x86_64-linux-gnu/../src/Processors/Transforms/IntersectOrExceptTransform.cpp:148: DB::IntersectOrExceptTransform::accumulate(DB::Chunk) @ 0x1f842caa in /workspace/clickhouse
2021.12.09 19:09:03.902041 [ 352 ] {} <Fatal> BaseDaemon: 9.1. inlined from ./obj-x86_64-linux-gnu/../src/Processors/Chunk.h:32: ~Chunk
2021.12.09 19:09:03.902070 [ 352 ] {} <Fatal> BaseDaemon: 9. ../src/Processors/Transforms/IntersectOrExceptTransform.cpp:82: DB::IntersectOrExceptTransform::work() @ 0x1f8423ee in /workspace/clickhouse
2021.12.09 19:09:03.907724 [ 352 ] {} <Fatal> BaseDaemon: 10. ./obj-x86_64-linux-gnu/../src/Processors/Executors/ExecutionThreadContext.cpp:53: DB::executeJob(DB::IProcessor*) @ 0x1f404559 in /workspace/clickhouse
2021.12.09 19:09:03.912849 [ 352 ] {} <Fatal> BaseDaemon: 11. ./obj-x86_64-linux-gnu/../src/Processors/Executors/ExecutionThreadContext.cpp:65: DB::ExecutionThreadContext::executeTask() @ 0x1f4043e8 in /workspace/clickhouse
2021.12.09 19:09:03.927555 [ 352 ] {} <Fatal> BaseDaemon: 12. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:213: DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x1f3f66c1 in /workspace/clickhouse
2021.12.09 19:09:03.943592 [ 352 ] {} <Fatal> BaseDaemon: 13.1. inlined from ./obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:2851: std::__1::shared_ptr<DB::ThreadGroupStatus>::operator bool() const
2021.12.09 19:09:03.943618 [ 352 ] {} <Fatal> BaseDaemon: 13.2. inlined from ../src/Processors/Executors/PipelineExecutor.cpp:304: operator()
2021.12.09 19:09:03.943636 [ 352 ] {} <Fatal> BaseDaemon: 13.3. inlined from ../base/base/../base/scope_guard.h:94: basic_scope_guard<DB::PipelineExecutor::executeImpl(unsigned long)::$_1::operator()() const::'lambda'()>::invoke()
2021.12.09 19:09:03.943669 [ 352 ] {} <Fatal> BaseDaemon: 13.4. inlined from ../base/base/../base/scope_guard.h:44: ~basic_scope_guard
2021.12.09 19:09:03.943706 [ 352 ] {} <Fatal> BaseDaemon: 13.5. inlined from ../src/Processors/Executors/PipelineExecutor.cpp:319: operator()
2021.12.09 19:09:03.943743 [ 352 ] {} <Fatal> BaseDaemon: 13. ../contrib/libcxx/include/type_traits:3682: decltype(std::__1::forward<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&>(fp)()) std::__1::__invoke_constexpr<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&) @ 0x1f3f7948 in /workspace/clickhouse
2021.12.09 19:09:03.959563 [ 352 ] {} <Fatal> BaseDaemon: 14.1. inlined from ./obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:0: operator()
2021.12.09 19:09:03.959587 [ 352 ] {} <Fatal> BaseDaemon: 14. ../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&) @ 0x1f3f7823 in /workspace/clickhouse
2021.12.09 19:09:03.969359 [ 352 ] {} <Fatal> BaseDaemon: 15.1. inlined from ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2533: std::__1::function<void ()>::operator=(std::nullptr_t)
2021.12.09 19:09:03.969379 [ 352 ] {} <Fatal> BaseDaemon: 15. ../src/Common/ThreadPool.cpp:277: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xde8bd1a in /workspace/clickhouse
2021.12.09 19:09:03.980747 [ 352 ] {} <Fatal> BaseDaemon: 16. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:0: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0xde8e722 in /workspace/clickhouse
2021.12.09 19:09:03.980771 [ 352 ] {} <Fatal> BaseDaemon: 17. ? @ 0x7f85ec463609 in ?
2021.12.09 19:09:03.980814 [ 352 ] {} <Fatal> BaseDaemon: 18. __clone @ 0x7f85ec38a293 in ?
2021.12.09 19:09:04.000093 [ 314 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 3.84 GiB, peak 9.57 GiB, will set to 3.87 GiB (RSS), difference: 32.32 MiB
2021.12.09 19:09:04.217518 [ 352 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 8142A36AB98199A5F585210D0B66F15A. There is no information about the reference checksum.
``` | https://github.com/ClickHouse/ClickHouse/issues/32474 | https://github.com/ClickHouse/ClickHouse/pull/51354 | 33d7cca9df0ed9d91c3c8ed3009f92142ce69f9d | 235328ab1dfa80b1489e5bd5adb19fed3e7673ce | "2021-12-09T17:36:06Z" | c++ | "2023-07-08T07:35:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,458 | ["src/Interpreters/join_common.cpp", "tests/queries/0_stateless/02133_issue_32458.reference", "tests/queries/0_stateless/02133_issue_32458.sql"] | Segmentation fault in JoinCommon::removeColumnNullability |
**Describe the bug**
[A link to the report](https://s3.amazonaws.com/clickhouse-test-reports/32291/2f43445a34c91beb1104422308ba5abfdf7be5bb/fuzzer_astfuzzertsan,actions//report.html)
**How to reproduce**
```SQL
CREATE TABLE t1 (`id` Int32, `key` String, `key2` String) ENGINE = TinyLog;
CREATE TABLE t2 (`id` Int32, `key` String, `key2` String) ENGINE = TinyLog;
SELECT (t1.id = t2.id) AND (t2.key = t2.key2) AND ((t1.id = t2.id) AND (t2.key = t2.key2) AND (t1.key = t1.key2) AND (t2.key2 = NULL)) AND (t1.key = t1.key2) AND (t2.key2 = NULL), NULL FROM t1
ANY INNER JOIN t2 ON (t1.key = t1.key2) AND (NULL = t1.key) AND (t2.key = t2.key2) AND ((NULL = t1.key) = t2.id) AND (('' = t1.key) = t2.id) AND (t2.key2 = NULL);
```
| https://github.com/ClickHouse/ClickHouse/issues/32458 | https://github.com/ClickHouse/ClickHouse/pull/32508 | 37837f3881ed76003ff1677ce827e50982771d21 | 17e5f5ccfe85f4347f98bbd727736528530b35be | "2021-12-09T13:58:39Z" | c++ | "2021-12-10T22:21:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,452 | ["src/Storages/StorageGenerateRandom.cpp", "tests/queries/0_stateless/01087_table_function_generate.reference", "tests/queries/0_stateless/01087_table_function_generate.sql"] | genarateRandom() does not support Date32 type | generateRandom() does not support generating random Date32 value:
```
SELECT *
FROM generateRandom('Date32 Date32', NULL, 10, 2)
LIMIT 10
Received exception from server (version 21.11.5):
Code: 48. DB::Exception: Received from localhost:9000. DB::Exception: The 'GenerateRandom' is not implemented for type Date32: While executing GenerateRandom. (NOT_IMPLEMENTED)
```
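Until Date32 is supported there, a hedged workaround is to generate `Date` and widen it (this assumes `toDate32` is available in this version; it only covers the narrower `Date` range, not the full `Date32` range):

```sql
-- Sketch: generate random Date values and cast them to Date32.
SELECT toDate32(d) AS d32
FROM generateRandom('d Date', NULL, 10, 2)
LIMIT 10;
```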
| https://github.com/ClickHouse/ClickHouse/issues/32452 | https://github.com/ClickHouse/ClickHouse/pull/32643 | fd9d40925a2ba0b0ea63a3ebbb316984fe14ca10 | 9eb2a4fe902c9effc0e41ce4464a1827586df046 | "2021-12-09T11:47:47Z" | c++ | "2021-12-13T17:38:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,443 | ["docker/test/fuzzer/run-fuzzer.sh"] | ASTFuzzer checkouts wrong code | https://s3.amazonaws.com/clickhouse-test-reports/0/3498e13551d7fffdff3079123f199e2658019159/fuzzer_astfuzzerdebug,actions//report.html
```
2021.12.08 11:31:24.518138 [ 120 ] {d7e821f6-7782-49a7-a536-a83672fc14d8} <Debug> executeQuery: (from [::ffff:127.0.0.1]:35412) SELECT * FROM t_materialize_column ORDER BY i ASC
2021.12.08 11:31:24.519935 [ 120 ] {d7e821f6-7782-49a7-a536-a83672fc14d8} <Trace> ContextAccess (default): Access granted: SELECT(i, s) ON default.t_materialize_column
2021.12.08 11:31:24.520372 [ 120 ] {d7e821f6-7782-49a7-a536-a83672fc14d8} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2021.12.08 11:31:24.521117 [ 120 ] {d7e821f6-7782-49a7-a536-a83672fc14d8} <Debug> default.t_materialize_column (cbf10931-5166-47b6-8bf1-0931516657b6) (SelectExecutor): Key condition: unknown
2021.12.08 11:31:24.521789 [ 120 ] {d7e821f6-7782-49a7-a536-a83672fc14d8} <Debug> default.t_materialize_column (cbf10931-5166-47b6-8bf1-0931516657b6) (SelectExecutor): MinMax index condition: unknown
2021.12.08 11:31:24.522595 [ 120 ] {d7e821f6-7782-49a7-a536-a83672fc14d8} <Debug> default.t_materialize_column (cbf10931-5166-47b6-8bf1-0931516657b6) (SelectExecutor): Selected 2/2 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges
2021.12.08 11:31:24.523168 [ 120 ] {d7e821f6-7782-49a7-a536-a83672fc14d8} <Debug> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part 2_3_3_0_4, approx. 1 rows starting from 0
2021.12.08 11:31:24.523508 [ 120 ] {d7e821f6-7782-49a7-a536-a83672fc14d8} <Debug> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part 1_1_1_0_4, approx. 1 rows starting from 0
2021.12.08 11:31:24.527267 [ 314 ] {d7e821f6-7782-49a7-a536-a83672fc14d8} <Fatal> : Logical error: 'Got empty stream for SerializationLowCardinality keys.'.
2021.12.08 11:31:24.528492 [ 315 ] {} <Fatal> BaseDaemon: (version 21.12.1.10126, build id: 7B038C97E0E0AE37) (from thread 314) (query_id: d7e821f6-7782-49a7-a536-a83672fc14d8) Received signal Aborted (6)
2021.12.08 11:31:34.598280 [ 312 ] {d7e821f6-7782-49a7-a536-a83672fc14d8} <Trace> PipelineExecutor: Thread finished. Total time: 10.072176655 sec. Execution time: 0.001538882 sec. Processing time: 0.000106508 sec. Wait time: 10.070531265 sec.
```
```
2021.12.08 11:31:24.527267 [ 314 ] {d7e821f6-7782-49a7-a536-a83672fc14d8} <Fatal> : Logical error: 'Got empty stream for SerializationLowCardinality keys.'.
2021.12.08 11:31:24.528282 [ 315 ] {} <Fatal> BaseDaemon: ########################################
2021.12.08 11:31:24.528492 [ 315 ] {} <Fatal> BaseDaemon: (version 21.12.1.10126, build id: 7B038C97E0E0AE37) (from thread 314) (query_id: d7e821f6-7782-49a7-a536-a83672fc14d8) Received signal Aborted (6)
2021.12.08 11:31:24.528619 [ 315 ] {} <Fatal> BaseDaemon:
2021.12.08 11:31:24.528768 [ 315 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f8b24dd818b 0x7f8b24db7859 0x155ae6b8 0x155ae7c2 0x21a59687 0x232913bb 0x2329090f 0x23b1eeac 0x23b1f17d 0x23b1fc45 0x23b236ae 0x23b2282a 0x23b1a19a 0x23b1ab6f 0x23b19b8b 0x23621110 0x23620ea4 0x23a3bbe2 0x2365b399 0x2365b295 0x2363f001 0x2363f2d7 0x23640a7b 0x236409dd 0x23640981 0x23640892 0x2364075b 0x2364061d 0x236405dd 0x236405b5 0x23640580 0x155fbde6 0x155faef5 0x156267ef 0x1562d9a4 0x1562d91d 0x1562d845 0x1562d182 0x7f8b24f8d609 0x7f8b24eb4293
2021.12.08 11:31:24.529047 [ 315 ] {} <Fatal> BaseDaemon: 4. raise @ 0x7f8b24dd818b in ?
2021.12.08 11:31:24.529157 [ 315 ] {} <Fatal> BaseDaemon: 5. abort @ 0x7f8b24db7859 in ?
2021.12.08 11:31:24.603626 [ 315 ] {} <Fatal> BaseDaemon: 6. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:51: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*> > const&) @ 0x155ae6b8 in /workspace/clickhouse
2021.12.08 11:31:24.664501 [ 315 ] {} <Fatal> BaseDaemon: 7. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:58: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x155ae7c2 in /workspace/clickhouse
2021.12.08 11:31:24.773807 [ 315 ] {} <Fatal> BaseDaemon: 8. ./obj-x86_64-linux-gnu/../src/DataTypes/Serializations/SerializationLowCardinality.cpp:618: DB::SerializationLowCardinality::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >*) const @ 0x21a59687 in /workspace/clickhouse
2021.12.08 11:31:24.909706 [ 315 ] {} <Fatal> BaseDaemon: 9. ./obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeReaderWide.cpp:283: DB::MergeTreeReaderWide::readData(DB::NameAndTypePair const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, bool, unsigned long, unsigned long, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >&, bool) @ 0x232913bb in /workspace/clickhouse
2021.12.08 11:31:25.055534 [ 315 ] {} <Fatal> BaseDaemon: 10. ./obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeReaderWide.cpp:116: DB::MergeTreeReaderWide::readRows(unsigned long, unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x2329090f in /workspace/clickhouse
2021.12.08 11:31:25.211253 [ 315 ] {} <Fatal> BaseDaemon: 11. ./obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeRangeReader.cpp:88: DB::MergeTreeRangeReader::DelayedStream::readRows(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&, unsigned long) @ 0x23b1eeac in /workspace/clickhouse
2021.12.08 11:31:25.366017 [ 315 ] {} <Fatal> BaseDaemon: 12. ./obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeRangeReader.cpp:162: DB::MergeTreeRangeReader::DelayedStream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x23b1f17d in /workspace/clickhouse
2021.12.08 11:31:25.520849 [ 315 ] {} <Fatal> BaseDaemon: 13. ./obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeRangeReader.cpp:273: DB::MergeTreeRangeReader::Stream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x23b1fc45 in /workspace/clickhouse
2021.12.08 11:31:25.678445 [ 315 ] {} <Fatal> BaseDaemon: 14. ./obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeRangeReader.cpp:799: DB::MergeTreeRangeReader::startReadingChain(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0x23b236ae in /workspace/clickhouse
2021.12.08 11:31:25.835509 [ 315 ] {} <Fatal> BaseDaemon: 15. ./obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeRangeReader.cpp:726: DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0x23b2282a in /workspace/clickhouse
2021.12.08 11:31:26.033598 [ 315 ] {} <Fatal> BaseDaemon: 16. ./obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp:173: DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0x23b1a19a in /workspace/clickhouse
2021.12.08 11:31:26.228720 [ 315 ] {} <Fatal> BaseDaemon: 17. ./obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp:218: DB::MergeTreeBaseSelectProcessor::readFromPart() @ 0x23b1ab6f in /workspace/clickhouse
2021.12.08 11:31:26.422478 [ 315 ] {} <Fatal> BaseDaemon: 18. ./obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp:81: DB::MergeTreeBaseSelectProcessor::generate() @ 0x23b19b8b in /workspace/clickhouse
2021.12.08 11:31:26.531674 [ 315 ] {} <Fatal> BaseDaemon: 19. ./obj-x86_64-linux-gnu/../src/Processors/ISource.cpp:79: DB::ISource::tryGenerate() @ 0x23621110 in /workspace/clickhouse
2021.12.08 11:31:26.640559 [ 315 ] {} <Fatal> BaseDaemon: 20. ./obj-x86_64-linux-gnu/../src/Processors/ISource.cpp:53: DB::ISource::work() @ 0x23620ea4 in /workspace/clickhouse
2021.12.08 11:31:26.747996 [ 315 ] {} <Fatal> BaseDaemon: 21. ./obj-x86_64-linux-gnu/../src/Processors/Sources/SourceWithProgress.cpp:65: DB::SourceWithProgress::work() @ 0x23a3bbe2 in /workspace/clickhouse
2021.12.08 11:31:26.827460 [ 315 ] {} <Fatal> BaseDaemon: 22. ./obj-x86_64-linux-gnu/../src/Processors/Executors/ExecutionThreadContext.cpp:45: DB::executeJob(DB::IProcessor*) @ 0x2365b399 in /workspace/clickhouse
2021.12.08 11:31:26.902450 [ 315 ] {} <Fatal> BaseDaemon: 23. ./obj-x86_64-linux-gnu/../src/Processors/Executors/ExecutionThreadContext.cpp:63: DB::ExecutionThreadContext::executeTask() @ 0x2365b295 in /workspace/clickhouse
2021.12.08 11:31:27.043040 [ 315 ] {} <Fatal> BaseDaemon: 24. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:213: DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x2363f001 in /workspace/clickhouse
2021.12.08 11:31:27.180810 [ 315 ] {} <Fatal> BaseDaemon: 25. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:178: DB::PipelineExecutor::executeSingleThread(unsigned long) @ 0x2363f2d7 in /workspace/clickhouse
2021.12.08 11:31:27.313115 [ 315 ] {} <Fatal> BaseDaemon: 26. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:311: DB::PipelineExecutor::executeImpl(unsigned long)::$_1::operator()() const @ 0x23640a7b in /workspace/clickhouse
2021.12.08 11:31:27.464940 [ 315 ] {} <Fatal> BaseDaemon: 27. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682: decltype(std::__1::forward<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&>(fp)()) std::__1::__invoke_constexpr<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&) @ 0x236409dd in /workspace/clickhouse
2021.12.08 11:31:27.616744 [ 315 ] {} <Fatal> BaseDaemon: 28. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415: decltype(auto) std::__1::__apply_tuple_impl<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&, std::__1::__tuple_indices<>) @ 0x23640981 in /workspace/clickhouse
2021.12.08 11:31:27.768447 [ 315 ] {} <Fatal> BaseDaemon: 29. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424: decltype(auto) std::__1::apply<DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&, std::__1::tuple<>&) @ 0x23640892 in /workspace/clickhouse
2021.12.08 11:31:27.899908 [ 315 ] {} <Fatal> BaseDaemon: 30. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:188: ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()::operator()() @ 0x2364075b in /workspace/clickhouse
2021.12.08 11:31:28.054350 [ 315 ] {} <Fatal> BaseDaemon: 31. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&) @ 0x2364061d in /workspace/clickhouse
2021.12.08 11:31:28.202907 [ 315 ] {} <Fatal> BaseDaemon: 32. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'()&>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&...) @ 0x236405dd in /workspace/clickhouse
2021.12.08 11:31:28.351191 [ 315 ] {} <Fatal> BaseDaemon: 33. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'(), void ()>::operator()() @ 0x236405b5 in /workspace/clickhouse
2021.12.08 11:31:28.499172 [ 315 ] {} <Fatal> BaseDaemon: 34. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x23640580 in /workspace/clickhouse
2021.12.08 11:31:28.542110 [ 315 ] {} <Fatal> BaseDaemon: 35. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x155fbde6 in /workspace/clickhouse
2021.12.08 11:31:28.582516 [ 315 ] {} <Fatal> BaseDaemon: 36. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560: std::__1::function<void ()>::operator()() const @ 0x155faef5 in /workspace/clickhouse
2021.12.08 11:31:28.640416 [ 315 ] {} <Fatal> BaseDaemon: 37. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:274: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x156267ef in /workspace/clickhouse
2021.12.08 11:31:28.711702 [ 315 ] {} <Fatal> BaseDaemon: 38. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:139: void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()::operator()() const @ 0x1562d9a4 in /workspace/clickhouse
2021.12.08 11:31:28.786969 [ 315 ] {} <Fatal> BaseDaemon: 39. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<void>(fp)(std::__1::forward<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(fp0)...)) std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...) @ 0x1562d91d in /workspace/clickhouse
2021.12.08 11:31:28.862079 [ 315 ] {} <Fatal> BaseDaemon: 40. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:281: void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>&, std::__1::__tuple_indices<>) @ 0x1562d845 in /workspace/clickhouse
2021.12.08 11:31:28.937126 [ 315 ] {} <Fatal> BaseDaemon: 41. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:291: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x1562d182 in /workspace/clickhouse
2021.12.08 11:31:28.937423 [ 315 ] {} <Fatal> BaseDaemon: 42. ? @ 0x7f8b24f8d609 in ?
2021.12.08 11:31:28.937643 [ 315 ] {} <Fatal> BaseDaemon: 43. clone @ 0x7f8b24eb4293 in ?
2021.12.08 11:31:30.047705 [ 315 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 3A9BFA9BB160F0E6969D2216C57345B8. There is no information about the reference checksum.
```
| https://github.com/ClickHouse/ClickHouse/issues/32443 | https://github.com/ClickHouse/ClickHouse/pull/32447 | 2eb3dc83a5d61ae9636c719099c402f0c6435dee | a641eff47019c9be3543f24debb906bf6cd429c3 | "2021-12-09T09:50:25Z" | c++ | "2021-12-09T13:15:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,401 | ["src/Processors/Formats/Impl/ArrowColumnToCHColumn.cpp", "tests/queries/0_stateless/02403_arrow_large_string.reference", "tests/queries/0_stateless/02403_arrow_large_string.sh"] | Support large_utf8 column format on parquet | **Describe the unexpected behaviour**
ClickHouse does not support Parquet files with 'large_utf8' columns (only 'utf8'). From my understanding, the only difference is that the offsets are u64 in 'large_utf8' instead of u32.
**How to reproduce**
```
import pyarrow
import pyarrow.parquet
import subprocess
for schema in (None,pyarrow.schema({"a": pyarrow.large_utf8()})):
a = pyarrow.table({"a": ["00000"]}, schema=schema)
pyarrow.parquet.write_table(a, "test.parquet")
subprocess.run(
"""cat test.parquet | clickhouse local \
--input-format "Parquet" \
--structure "a String" \
--query "select * from table"\
""",
shell=True,
check=True,
)
```
gives
```
00000
Code: 50. DB::Exception: Unsupported Parquet type 'large_utf8' of an input column 'a'.: While executing ParquetBlockInputFormat: While executing File. (UNKNOWN_TYPE)
```
| https://github.com/ClickHouse/ClickHouse/issues/32401 | https://github.com/ClickHouse/ClickHouse/pull/40293 | 5a85531ef760076214f246ab7ceea35ec293c60a | 09a2ff88435f79e5279745bbe1dc0e5e401df38d | "2021-12-08T14:07:50Z" | c++ | "2022-08-18T12:01:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,391 | ["src/IO/WriteHelpers.cpp", "src/Interpreters/ClusterProxy/executeQuery.h", "src/Processors/QueryPlan/ReadFromRemote.cpp", "src/Storages/StorageDistributed.cpp", "tests/queries/0_stateless/01455_opentelemetry_distributed.reference", "tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.reference", "tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.sql", "tests/queries/0_stateless/01999_grant_with_replace.reference", "tests/queries/0_stateless/02133_distributed_queries_formatting.reference", "tests/queries/0_stateless/02133_distributed_queries_formatting.sql"] | Syntax error when column named ALL or DISTINCT in distributed query | **Describe what's wrong**
If a column named `ALL` or `DISTINCT` appears in the first position of the select list in a distributed query, the query fails with `Syntax error`.
In the rewritten query the column name appears without quotation marks, so the remote server interprets `DISTINCT` as part of `SELECT DISTINCT` instead of as a column name. The same applies to a column named `ALL`.
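For illustration (reconstructed from the client echo and the error message below, not quoted from the report), the rewritten query forwarded to the remote shard ends up looking like this, which is why parsing fails at position 22:

```sql
SELECT DISTINCT FROM default.t0
```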
**Does it reproduce on recent release?**
Yes, it reproduces on the 21.11.5 official build.
**How to reproduce**
* Which ClickHouse server version to use
* Which interface to use, if matters
* Non-default settings, if any
```sql
SET prefer_localhost_replica=0
```
* `CREATE TABLE` statements for all tables involved
```sql
CREATE TABLE t0 ("KEY" Int64, "ALL" Int64, "DISTINCT" Int64) ENGINE = MergeTree() ORDER BY KEY
CREATE TABLE dist_t0 ("KEY" Int64, "ALL" Int64, "DISTINCT" Int64) ENGINE = Distributed(test_shard_localhost, default, t0)
```
* Sample data for all these tables, use [clickhouse-obfuscator](https://github.com/ClickHouse/ClickHouse/blob/master/programs/obfuscator/Obfuscator.cpp#L42-L80) if necessary
* Queries to run that lead to unexpected result
```sql
:) select "DISTINCT" from dist_t0
SELECT DISTINCT
FROM dist_t0
Query id: d9024cf2-777e-4f5e-a7fa-f9d2eb8dc661
0 rows in set. Elapsed: 0.013 sec.
Received exception from server (version 21.11.5):
Code: 62. DB::Exception: Received from localhost:9000. DB::Exception: Received from localhost:9000. DB::Exception: Syntax error: failed at position 22 ('default'): default.t0. Expected one of: UNION, LIMIT, WHERE, WINDOW, DoubleColon, LIKE, GLOBAL NOT IN, end of query, HAVING, AS, DIV, IS, UUID, GROUP BY, INTO OUTFILE, OR, EXCEPT, QuestionMark, OFFSET, BETWEEN, NOT LIKE, MOD, PREWHERE, AND, Comma, alias, ORDER BY, SETTINGS, IN, ILIKE, INTERSECT, FROM, FORMAT, Dot, NOT ILIKE, WITH, NOT, Arrow, token, NOT IN, GLOBAL IN. (SYNTAX_ERROR)
```
| https://github.com/ClickHouse/ClickHouse/issues/32391 | https://github.com/ClickHouse/ClickHouse/pull/32490 | 2b5409120db19c12724c917541536f2814ab6c43 | 52328f6abc2425a960c1bff314b2932e91e5dbf8 | "2021-12-08T12:41:23Z" | c++ | "2021-12-13T13:41:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,374 | ["src/Parsers/ASTFunction.cpp", "tests/queries/0_stateless/02183_array_tuple_literals_remote.reference", "tests/queries/0_stateless/02183_array_tuple_literals_remote.sql"] | Exception when use function array in function multiIf | **Describe what's wrong**
> Using the function 'array' inside the function 'multiIf' raises an exception, but the query executes normally when [] is used in place of the 'array' function.
**How to reproduce**
* Which ClickHouse server version to use
```
Based on the latest version:
VERSION_DESCRIBE: v21.12.1.1-prestable
commit: 514120adfefd8836b8e0c9c6f89e878d5faf883e
```
* `CREATE TABLE` statements for all tables involved
```
# Create distributed tables and local tables on a cluster with two shards (test_2_0).
CREATE TABLE default.test_array (`x` Nullable(String)) ENGINE = MergeTree ORDER BY tuple() SETTINGS index_granularity = 8192;
CREATE TABLE default.test_array_all (`x` Nullable(String)) ENGINE = Distributed(test_2_0, default, test_array);
```
* Sample data for all these tables, use [clickhouse-obfuscator](https://github.com/ClickHouse/ClickHouse/blob/master/programs/obfuscator/Obfuscator.cpp#L42-L80) if necessary
```
# Insert data into two local tables.
INSERT INTO default.test_array VALUES (null), ('');
```
* Queries to run that lead to unexpected result
```sql
-- SQL that raises the exception:
SELECT multiIf(x='a', array('a'), array()) FROM test_array_all
-- SQL that works normally:
SELECT multiIf(x = 'a', ['a'], []) FROM test_array_all
```
**Expected behavior**
> No exception should be thrown.
**Error message and/or stacktrace**
```
Received exception from server (version 21.12.1):
Code: 10. DB::Exception: Received from localhost:9020. DB::Exception: Not found column multiIf(equals(x, 'a'), array('a'), array()) inblock. There are only columns: multiIf(equals(x, 'a'), ['a'], array()): While executing Remote. Stack trace:
0. ./build/../contrib/libcxx/include/exception:133: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x143bb14c in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
1. ./build/../src/Common/Exception.cpp:57: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x9e972fa in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
2. ./build/../src/Core/Block.cpp:0: DB::Block::getByName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0x10e681d7 in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
3. ./build/../src/QueryPipeline/RemoteQueryExecutor.cpp:181: DB::RemoteQueryExecutor::processPacket(DB::Packet) @ 0x1102d49c in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
4. ./build/../contrib/libcxx/include/vector:463: DB::RemoteQueryExecutor::read(std::__1::unique_ptr<DB::RemoteQueryExecutorReadContext, std::__1::default_delete<DB::RemoteQueryExecutorReadContext> >&) @ 0x1102e588 in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
5. ./build/../contrib/libcxx/include/variant:700: DB::RemoteSource::tryGenerate() @ 0x123271df in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
6. ./build/../contrib/libcxx/include/optional:295: DB::ISource::work() @ 0x121768ba in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
7. ./build/../src/Processors/Sources/SourceWithProgress.cpp:67: DB::SourceWithProgress::work() @ 0x1232a316 in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
8. ./build/../src/Processors/Executors/ExecutionThreadContext.cpp:65: DB::ExecutionThreadContext::executeTask() @ 0x1218cd63 in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
9. ./build/../src/Processors/Executors/PipelineExecutor.cpp:213: DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x121888e0 in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
10. ./build/../contrib/libcxx/include/memory:2851: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_1>(DB::PipelineExecutor::executeImpl(unsigned long)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x121899e8 in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
11. ./build/../contrib/libcxx/include/functional:2210: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x9ecb530 in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
12. ./build/../contrib/libcxx/include/memory:1655: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x9ecd933 in /data01/yuanquan/community_clickhouse/build/programs/clickhouse
13. start_thread @ 0x74a4 in /lib/x86_64-linux-gnu/libpthread-2.24.so
14. __clone @ 0xe8d0f in /lib/x86_64-linux-gnu/libc-2.24.so
. (NOT_FOUND_COLUMN_IN_BLOCK)
``` | https://github.com/ClickHouse/ClickHouse/issues/32374 | https://github.com/ClickHouse/ClickHouse/pull/33938 | cd2305eb57d02a0ef17a2ce0f39d905b83d8a080 | ea55c9a0ae207d9f674453aa6ac9508298b8e41c | "2021-12-08T09:10:11Z" | c++ | "2022-01-27T19:58:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,363 | ["docs/en/operations/clickhouse-keeper.md", "src/Coordination/KeeperServer.cpp"] | It is possible to intentionally misconfigure ClickHouse Keeper. If election_timeout_lower_bound_ms > election_timeout_upper_bound_ms breaks leader election. | **Unexpected behavior**
In the ClickHouse Keeper config, under `<keeper_server><coordination_settings>`, setting election_timeout_lower_bound_ms > election_timeout_upper_bound_ms seems to break leader election. However, this is how the documentation describes the two settings:
election_timeout_lower_bound_ms — If the follower didn't receive heartbeats from the leader in this interval, then it **can** initiate leader election (default: 1000).
election_timeout_upper_bound_ms — If the follower didn't receive heartbeats from the leader in this interval, then it **must** initiate leader election (default: 2000).
**Expected behavior**
Leader election should still be initiated, using election_timeout_upper_bound_ms as the **must** threshold, or at least a warning should be logged about the incorrect election_timeout_lower_bound_ms value.
| https://github.com/ClickHouse/ClickHouse/issues/32363 | https://github.com/ClickHouse/ClickHouse/pull/46274 | 47cd5f8e9d3a4d2a271f9cdd95baf846fe9e90dd | 9bef1bec2860ae507ff1377388efb7b4c898e892 | "2021-12-08T05:29:16Z" | c++ | "2023-02-14T11:44:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,305 | ["src/DataTypes/DataTypeTuple.cpp", "tests/queries/0_stateless/02286_tuple_numeric_identifier.reference", "tests/queries/0_stateless/02286_tuple_numeric_identifier.sql"] | JSONExtract Tuple name can't start from digit. | **Describe what's wrong**
It's not possible to use a digit as a tuple element name, so it's not possible to parse such JSON into a nested tuple structure with JSONExtract.
**Does it reproduce on recent release?**
Yes
ClickHouse 21.12
**How to reproduce**
```
WITH
'{"1":{"key":"value"}}' AS data,
JSONExtract(data, 'Tuple("1" Tuple(key String))') AS parsed_json
SELECT parsed_json AS ssid
Query id: 5bc7b6cf-155a-421c-ac20-f3be09bd9692
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 21.12.1):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Explicitly specified names of tuple elements cannot start with digit: While processing JSONExtract('{"1":{"key":"value"}}' AS data, 'Tuple("1" Tuple(key String))') AS ssid. (BAD_ARGUMENTS)
```
**Expected behavior**
It's possible to parse JSON with digit keys.
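A possible workaround sketch (an assumption, not from the report) is to address the digit key with the multi-argument JSON functions instead of naming it as a tuple element:

```sql
-- Hypothetical workaround: walk the path '1' -> 'key' directly
SELECT JSONExtractString('{"1":{"key":"value"}}', '1', 'key') AS value;
```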
| https://github.com/ClickHouse/ClickHouse/issues/32305 | https://github.com/ClickHouse/ClickHouse/pull/36544 | bd1f12e5d50595728e3c6078f3ed73eaac696dfa | 17bb7f175b8f5b3ed7ff90ed552c40a85133e696 | "2021-12-06T18:55:26Z" | c++ | "2022-04-26T10:22:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,292 | ["src/Storages/MergeTree/MergeTreeIndices.h", "tests/queries/0_stateless/02131_skip_index_not_materialized.reference", "tests/queries/0_stateless/02131_skip_index_not_materialized.sql"] | Query that uses skip index fails if index is not materialized | ```sql
CREATE TABLE t ( a UInt32) ENGINE = MergeTree ORDER BY tuple();
INSERT INTO t VALUES (1);
ALTER TABLE t ADD INDEX ind (a) TYPE set(1) GRANULARITY 1;
SELECT count() FROM t WHERE a = 1;
Received exception from server (version 21.11.5):
Code: 1001. DB::Exception: Received from localhost:9000. DB::Exception: std::__1::__fs::filesystem::filesystem_error: filesystem error: in file_size: No such file or directory [./store/402/402d2274-1310-43f7-802d-22741310f3f7/all_1_1_0_2/skp_idx_ind.mrk3]. (STD_EXCEPTION)
```
**Does it reproduce on recent release?**
Reproduces on 21.11 and master.
**Expected behavior**
Index should be skipped if it doesn't exist in part.
Most likely introduced in #27250.
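Until that is fixed, a possible workaround sketch (not from the report) is to build the index for the parts that were written before it was added, so the index files exist on disk:

```sql
-- Hypothetical workaround: materialize the newly added index for existing parts
ALTER TABLE t MATERIALIZE INDEX ind;
```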
| https://github.com/ClickHouse/ClickHouse/issues/32292 | https://github.com/ClickHouse/ClickHouse/pull/32359 | 66e1fb7adad8ce28af4c9cf126f704bdefafa746 | a241103714422b775852790633451167433125c1 | "2021-12-06T15:54:31Z" | c++ | "2021-12-13T11:43:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,207 | ["src/Interpreters/InterpreterInsertQuery.cpp", "src/QueryPipeline/Chain.h", "tests/queries/0_stateless/02137_mv_into_join.reference", "tests/queries/0_stateless/02137_mv_into_join.sql"] | [21.11+] LOGICAL_ERROR on MV into JOIN tables: Context has expired | Sort repro:
```sql
CREATE TABLE a ( `id` String, `color` String, `section` String, `description` String) ENGINE = MergeTree ORDER BY tuple()
CREATE TABLE b ( `key` String, `id` String, `color` String, `section` String, `description` String) ENGINE = Join(ANY, LEFT, key)
CREATE MATERIALIZED VIEW c TO `b` AS SELECT concat(id, '_', color) AS key, * FROM a
INSERT INTO a VALUES ('sku_0001','black','women','nice shirt')
```
Versions:
* 21.10.4.26 OK
* 21.11.2.2: KO
* 21.11.5.33: KO
* 21.12.1.9879 (Today's master): KO
Backtrace:
* 21.11:
```
2021.12.03 16:12:25.250703 [ 764841 ] {e404297a-1a02-46ef-9526-b6b9865f5c61} <Error> TCPHandler: Code: 49. DB::Exception: Context has expired: while pushing to view d2.c (44692c40-db1f-4541-8469-2c40db1f0541). (LOGICAL_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x9b66fd4 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
1. DB::WithContextImpl<std::__1::shared_ptr<DB::Context const> >::getContext() const @ 0xc4aca06 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
2. DB::SetOrJoinSink::consume(DB::Chunk) @ 0x12b37d36 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
3. DB::SinkToStorage::transform(DB::Chunk&) @ 0x13326044 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
4. ? @ 0x132a6649 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
5. DB::ExceptionKeepingTransform::work() @ 0x132a613c in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
6. ? @ 0x131348bb in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
7. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x13130871 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
8. DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) @ 0x1312f085 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
9. DB::TCPHandler::processInsertQuery() @ 0x130d5e46 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
10. DB::TCPHandler::runImpl() @ 0x130ce9b7 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
11. DB::TCPHandler::run() @ 0x130e2239 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
12. Poco::Net::TCPServerConnection::start() @ 0x15d1d16f in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
13. Poco::Net::TCPServerDispatcher::run() @ 0x15d1f561 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
14. Poco::PooledThread::run() @ 0x15e33f09 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
15. Poco::ThreadImpl::runnableEntry(void*) @ 0x15e31640 in /mnt/ch/official_binaries/clickhouse-common-static-21.11.2.2/usr/bin/clickhouse
16. start_thread @ 0x9259 in /usr/lib/libpthread-2.33.so
17. __GI___clone @ 0xfe5e3 in /usr/lib/libc-2.33.so
```
Master:
```
2021.12.03 16:14:15.653601 [ 766370 ] {f1b6b8e4-5c94-40cd-b26a-11b438e15c0a} <Error> TCPHandler: Code: 49. DB::Exception: Context has expired: while pushing to view d4.c (1618345e-1662-44e4-9618-345e166274e4). (LOGICAL_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa217b1a in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
1. DB::WithContextImpl<std::__1::shared_ptr<DB::Context const> >::getContext() const @ 0xcc2a046 in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
2. DB::SetOrJoinSink::consume(DB::Chunk) @ 0x138503f6 in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
3. DB::SinkToStorage::transform(DB::Chunk&) @ 0x141f477d in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
4. ? @ 0x14172309 in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
5. DB::ExceptionKeepingTransform::work() @ 0x14171dfc in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
6. DB::ExecutionThreadContext::executeTask() @ 0x14003dc3 in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
7. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x13ff806f in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
8. DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) @ 0x13ff7de0 in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
9. DB::TCPHandler::processInsertQuery() @ 0x13fa0720 in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
10. DB::TCPHandler::runImpl() @ 0x13f999dd in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
11. DB::TCPHandler::run() @ 0x13fad019 in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
12. Poco::Net::TCPServerConnection::start() @ 0x16efeccf in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
13. Poco::Net::TCPServerDispatcher::run() @ 0x16f01121 in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
14. Poco::PooledThread::run() @ 0x1700fea9 in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
15. Poco::ThreadImpl::runnableEntry(void*) @ 0x1700d5a0 in /mnt/ch/official_binaries/clickhouse-common-static-21.12.1.9879/usr/bin/clickhouse
16. ? @ 0x7f1b42c8b259 in ?
17. clone @ 0x7f1b42bb45e3 in ?
```
Note that in a longer repro (with multiple MV writting to multiple join tables) I've also gotten a `UNKNOWN_TABLE` eventhough the table exists and it's available. | https://github.com/ClickHouse/ClickHouse/issues/32207 | https://github.com/ClickHouse/ClickHouse/pull/32669 | c7a5fb758247bd35d9731c48a5232ae28d27fd5b | 6083869b5d032a94128882f7e78616e764f9349f | "2021-12-03T15:16:07Z" | c++ | "2021-12-15T23:01:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,107 | ["src/AggregateFunctions/QuantileTDigest.h", "tests/queries/0_stateless/02286_quantile_tdigest_infinity.reference", "tests/queries/0_stateless/02286_quantile_tdigest_infinity.sql"] | Error with AggregateFunction quantilesTDigest causing part merges to fail | **Describe what's wrong**
We had an issue with `INSERT`s into ClickHouse throwing an error:
```
2021-11-29 18:26:13.177 ESTError message from worker: ru.yandex.clickhouse.except.ClickHouseUnknownException: ClickHouse exception, code: 1002, host: <clickhouse_host>, port: 8123; Code: 252. DB::Exception: Too many parts (301). Merges are processing significantly slower than inserts: while pushing to view default.metrics_shard_stat_10m_view (aa99702d-bd2c-4aca-aa99-702dbd2c6aca): while pushing to view default.metrics_shard_stat_1m_view (c2e4400c-c18e-46ae-82e4-400cc18ef6ae). (TOO_MANY_PARTS) (version 21.11.3.6 (official build))
```
Upon investigation it appeared that there was a large number of unmerged parts for the `metrics_shard_stat_10m` table. This seems to be due to the error below, thrown while deserializing an `AggregateFunction` `quantilesTDigest()` column. This was also causing the node to have very high CPU usage.
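For reference, a hypothetical diagnostic query (not from the report; only the database and table name are taken from it) showing how the pile-up of unmerged parts can be observed:

```sql
SELECT partition, count() AS active_parts
FROM system.parts
WHERE database = 'default' AND table = 'metrics_shard_stat_10m' AND active
GROUP BY partition
ORDER BY active_parts DESC;
```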
**Does it reproduce on recent release?**
We're using ClickHouse server version ` 21.11.3.6`
**Table Schema**
`metrics_shard_stat_1m`:
```
CREATE TABLE default.metrics_shard_stat_1m ON CLUSTER testcluster
(
metric_id UUID,
date Date DEFAULT toDate(bucket) CODEC (DoubleDelta),
bucket DateTime CODEC (DoubleDelta, LZ4),
s_avg AggregateFunction(avg, Float64),
s_count AggregateFunction(count, Float64),
sum SimpleAggregateFunction(sum, Float64) CODEC (Gorilla),
sum2 SimpleAggregateFunction(sum, Float64) CODEC (Gorilla),
min SimpleAggregateFunction(min, Float64) CODEC (Gorilla),
max SimpleAggregateFunction(max, Float64) CODEC (Gorilla),
first SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
last SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
s_linear_regression AggregateFunction(simpleLinearRegression, UInt32, Float64),
s_variance AggregateFunction(varSamp, Float64),
s_quantiles AggregateFunction(quantilesTDigest(0.10, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99),
Float64),
s_avg_nozero AggregateFunction(avg, Float64),
s_count_nozero AggregateFunction(count, Float64),
sum_nozero SimpleAggregateFunction(sum, Float64) CODEC (Gorilla),
min_nozero SimpleAggregateFunction(min, Float64) CODEC (Gorilla),
max_nozero SimpleAggregateFunction(max, Float64) CODEC (Gorilla),
first_nozero SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
last_nozero SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
s_linear_regression_nozero AggregateFunction(simpleLinearRegression, UInt32, Float64),
s_variance_nozero AggregateFunction(varSamp, Float64),
s_quantiles_nozero AggregateFunction(quantilesTDigest(0.10, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99),
Float64)
) ENGINE = ReplicatedAggregatingMergeTree() PARTITION BY toYYYYMM(bucket)
ORDER BY (metric_id, date, bucket) SETTINGS index_granularity = 64;
CREATE MATERIALIZED VIEW default.metrics_shard_stat_1m_view ON CLUSTER testcluster TO default.metrics_shard_stat_1m AS
SELECT metric_id,
date,
toStartOfMinute(toDateTime64(timestamp, 3)) as bucket,
avgState(value) as s_avg,
countState(value) as s_count,
sum(value) as sum,
sum(value * value) as sum2,
min(value) as min,
max(value) as max,
any(value) as first,
anyLast(value) as last,
quantilesTDigestState(0.10, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99)(value) as s_quantiles,
simpleLinearRegressionState(toUnixTimestamp(timestamp), value) as s_linear_regression,
varSampState(value) as s_variance,
avgStateIf(value, value > 0) as s_avg_nozero,
countStateIf(value, value > 0) as s_count_nozero,
sumIf(value, value > 0) as sum_nozero,
minIf(value, value > 0) as min_nozero,
maxIf(value, value > 0) as max_nozero,
anyIf(value, value > 0) as first_nozero,
anyLastIf(value, value > 0) as last_nozero,
quantilesTDigestStateIf(0.10, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99)(value, value > 0) as s_quantiles_nozero,
simpleLinearRegressionStateIf(toUnixTimestamp(timestamp), value, value >
0) as s_linear_regression_nozero,
varSampStateIf(value, value > 0) as s_variance_nozero
FROM default.metrics_shard
GROUP BY metric_id,
date,
bucket;
CREATE TABLE default.metrics_stat_1m ON CLUSTER testcluster
(
metric_id UUID,
date Date DEFAULT toDate(bucket) CODEC (DoubleDelta),
bucket DateTime CODEC (DoubleDelta, LZ4),
s_avg AggregateFunction(avg, Float64),
s_count AggregateFunction(count, Float64),
sum SimpleAggregateFunction(sum, Float64) CODEC (Gorilla),
sum2 SimpleAggregateFunction(sum, Float64) CODEC (Gorilla),
min SimpleAggregateFunction(min, Float64) CODEC (Gorilla),
max SimpleAggregateFunction(max, Float64) CODEC (Gorilla),
first SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
last SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
s_quantiles AggregateFunction(quantilesTDigest(0.10, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99),
Float64),
s_linear_regression AggregateFunction(simpleLinearRegression, UInt32, Float64),
s_variance AggregateFunction(varSamp, Float64),
s_avg_nozero AggregateFunction(avg, Float64),
s_count_nozero AggregateFunction(count, Float64),
sum_nozero SimpleAggregateFunction(sum, Float64) CODEC (Gorilla),
min_nozero SimpleAggregateFunction(min, Float64) CODEC (Gorilla),
max_nozero SimpleAggregateFunction(max, Float64) CODEC (Gorilla),
first_nozero SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
last_nozero SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
s_quantiles_nozero AggregateFunction(quantilesTDigest(0.10, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99),
Float64),
s_linear_regression_nozero AggregateFunction(simpleLinearRegression, UInt32, Float64),
s_variance_nozero AggregateFunction(varSamp, Float64)
) ENGINE = Distributed(
testcluster,
default,
metrics_shard_stat_1m,
farmFingerprint64(metric_id)
);
```
`metrics_shard_stat_10m`:
```
CREATE TABLE default.metrics_shard_stat_10m ON cluster testcluster
(
metric_id UUID,
date Date DEFAULT toDate(bucket) CODEC (DoubleDelta),
bucket DateTime CODEC (DoubleDelta, LZ4),
s_avg AggregateFunction(avg, Float64),
s_count AggregateFunction(count, Float64),
sum SimpleAggregateFunction(sum, Float64) CODEC (Gorilla),
sum2 SimpleAggregateFunction(sum, Float64) CODEC (Gorilla),
min SimpleAggregateFunction(min, Float64) CODEC (Gorilla),
max SimpleAggregateFunction(max, Float64) CODEC (Gorilla),
first SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
last SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
s_linear_regression AggregateFunction(simpleLinearRegression, UInt32, Float64),
s_variance AggregateFunction(varSamp, Float64),
s_quantiles AggregateFunction(quantilesTDigest(0.10, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99),
Float64),
s_avg_nozero AggregateFunction(avg, Float64),
s_count_nozero AggregateFunction(count, Float64),
sum_nozero SimpleAggregateFunction(sum, Float64) CODEC (Gorilla),
min_nozero SimpleAggregateFunction(min, Float64) CODEC (Gorilla),
max_nozero SimpleAggregateFunction(max, Float64) CODEC (Gorilla),
first_nozero SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
last_nozero SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
s_linear_regression_nozero AggregateFunction(simpleLinearRegression, UInt32, Float64),
s_variance_nozero AggregateFunction(varSamp, Float64),
s_quantiles_nozero AggregateFunction(quantilesTDigest(0.10, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99),
Float64)
) ENGINE = ReplicatedAggregatingMergeTree() PARTITION BY toYYYYMM(bucket)
ORDER BY (metric_id, date, bucket) SETTINGS index_granularity = 64;
CREATE MATERIALIZED VIEW default.metrics_shard_stat_10m_view ON CLUSTER testcluster TO default.metrics_shard_stat_10m AS
SELECT metric_id,
date,
toStartOfTenMinutes(toDateTime64(bucket, 3)) as bucket,
avgMergeState(s_avg) as s_avg,
countMergeState(s_count) as s_count,
sum(sum) as sum,
sum(sum2) as sum2,
min(min) as min,
max(max) as max,
any(first) as first,
anyLast(last) as last,
quantilesTDigestMergeState(0.10, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99)(s_quantiles) as s_quantiles,
simpleLinearRegressionMergeState(s_linear_regression) as s_linear_regression,
varSampMergeState(s_variance) as s_variance,
avgMergeState(s_avg_nozero) as s_avg_nozero,
countMergeState(s_count_nozero) as s_count_nozero,
sum(sum_nozero) as sum_nozero,
min(min_nozero) as min_nozero,
max(max_nozero) as max_nozero,
any(first_nozero) as first_nozero,
anyLast(last_nozero) as last_nozero,
quantilesTDigestMergeState(0.10, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99)(s_quantiles_nozero) as s_quantiles_nozero,
simpleLinearRegressionMergeState(s_linear_regression_nozero) as s_linear_regression_nozero,
varSampMergeState(s_variance_nozero) as s_variance_nozero
FROM default.metrics_shard_stat_1m
GROUP BY metric_id,
date,
bucket;
CREATE TABLE default.metrics_stat_10m ON CLUSTER testcluster
(
metric_id UUID,
date Date DEFAULT toDate(bucket) CODEC (DoubleDelta),
bucket DateTime CODEC (DoubleDelta, LZ4),
s_avg AggregateFunction(avg, Float64),
s_count AggregateFunction(count, Float64),
sum SimpleAggregateFunction(sum, Float64) CODEC (Gorilla),
sum2 SimpleAggregateFunction(sum, Float64) CODEC (Gorilla),
min SimpleAggregateFunction(min, Float64) CODEC (Gorilla),
max SimpleAggregateFunction(max, Float64) CODEC (Gorilla),
first SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
last SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
s_quantiles AggregateFunction(quantilesTDigest(0.10, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99),
Float64),
s_linear_regression AggregateFunction(simpleLinearRegression, UInt32, Float64),
s_variance AggregateFunction(varSamp, Float64),
s_avg_nozero AggregateFunction(avg, Float64),
s_count_nozero AggregateFunction(count, Float64),
sum_nozero SimpleAggregateFunction(sum, Float64) CODEC (Gorilla),
min_nozero SimpleAggregateFunction(min, Float64) CODEC (Gorilla),
max_nozero SimpleAggregateFunction(max, Float64) CODEC (Gorilla),
first_nozero SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
last_nozero SimpleAggregateFunction(any, Float64) CODEC (Gorilla),
s_quantiles_nozero AggregateFunction(quantilesTDigest(0.10, 0.25, 0.50, 0.75, 0.90, 0.95, 0.99),
Float64),
s_linear_regression_nozero AggregateFunction(simpleLinearRegression, UInt32, Float64),
s_variance_nozero AggregateFunction(varSamp, Float64)
) ENGINE = Distributed(
testcluster,
default,
metrics_shard_stat_10m,
farmFingerprint64(metric_id)
);
```
**Expected behavior**
The `quantilesTDigest()` function should work, or throw a meaningful error, without causing part merges to be blocked forever.
**Error message and/or stacktrace**
```
2021.12.01 03:58:30.375438 [ 62 ] {} <Error> void DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::routine(DB::TaskRuntimeDataPtr) [Queue = DB::MergeMutateRuntimeQueue]: Code: 27. DB::Exception: Invalid centroid 2.000000:-nan: (while reading column s_quantiles): (while reading from part /var/lib/clickhouse/store/29e/29ec85cb-e183-4946-a9ec-85cbe1837946/202111_5921759_5923975_485/ from mark 5 with max_rows_to_read = 64): While executing MergeTreeSequentialSource. (CANNOT_PARSE_INPUT_ASSERTION_FAILED), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x9b63054 in /usr/bin/clickhouse
1. DB::QuantileTDigest<double>::deserialize(DB::ReadBuffer&) @ 0xa51b433 in /usr/bin/clickhouse
2. DB::SerializationAggregateFunction::deserializeBinaryBulk(DB::IColumn&, DB::ReadBuffer&, unsigned long, double) const @ 0x11cd71fa in /usr/bin/clickhouse
3. DB::ISerialization::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >*) const @ 0x11cd4435 in /usr/bin/clickhouse
4. DB::MergeTreeReaderWide::readData(DB::NameAndTypePair const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, bool, unsigned long, unsigned long, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >&, bool) @ 0x12e854a2 in /usr/bin/clickhouse
5. DB::MergeTreeReaderWide::readRows(unsigned long, unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x12e84279 in /usr/bin/clickhouse
6. DB::MergeTreeSequentialSource::generate() @ 0x12e88377 in /usr/bin/clickhouse
7. DB::ISource::tryGenerate() @ 0x131156b5 in /usr/bin/clickhouse
8. DB::ISource::work() @ 0x1311527a in /usr/bin/clickhouse
9. DB::SourceWithProgress::work() @ 0x13321062 in /usr/bin/clickhouse
10. ? @ 0x13130b1b in /usr/bin/clickhouse
11. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x1312cad1 in /usr/bin/clickhouse
12. DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) @ 0x1312b2e5 in /usr/bin/clickhouse
13. DB::PullingPipelineExecutor::pull(DB::Chunk&) @ 0x13139d4b in /usr/bin/clickhouse
14. DB::PullingPipelineExecutor::pull(DB::Block&) @ 0x13139fec in /usr/bin/clickhouse
15. DB::MergeTask::ExecuteAndFinalizeHorizontalPart::executeImpl() @ 0x12d3d52b in /usr/bin/clickhouse
16. DB::MergeTask::ExecuteAndFinalizeHorizontalPart::execute() @ 0x12d3d48b in /usr/bin/clickhouse
17. DB::MergeTask::execute() @ 0x12d4217a in /usr/bin/clickhouse
18. DB::MergePlainMergeTreeTask::executeStep() @ 0x12fbceec in /usr/bin/clickhouse
19. DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::routine(std::__1::shared_ptr<DB::TaskRuntimeData>) @ 0x12d50bdd in /usr/bin/clickhouse
20. DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::threadFunction() @ 0x12d5167a in /usr/bin/clickhouse
21. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x9ba7d0a in /usr/bin/clickhouse
22. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0x9ba9b27 in /usr/bin/clickhouse
23. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x9ba5117 in /usr/bin/clickhouse
24. ? @ 0x9ba8b1d in /usr/bin/clickhouse
25. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
26. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 21.11.3.6 (official build))
```
**System Settings**
```
connect_timeout_with_failover_ms,1000,1,Connection timeout for selecting first healthy replica.,,,0,Milliseconds
load_balancing,random,1,Which replicas (among healthy replicas) to preferably send a query to (on the first attempt) for distributed processing.,,,0,LoadBalancing
distributed_aggregation_memory_efficient,1,1,Is the memory-saving mode of distributed aggregation enabled.,,,0,Bool
log_queries,1,1,Log requests and write the log to the system table.,,,0,Bool
max_memory_usage,10000000000,1,Maximum memory usage for processing of single query. Zero means unlimited.,,,0,UInt64
parallel_view_processing,1,1,Enables pushing to attached views concurrently instead of sequentially.,,,0,Bool
default_database_engine,Ordinary,1,Default database engine.,,,0,DefaultDatabaseEngine
```
| https://github.com/ClickHouse/ClickHouse/issues/32107 | https://github.com/ClickHouse/ClickHouse/pull/37021 | ed7df7cabd5a2862085e36986305d98b863c38a8 | 5e34f48a181744a9f9241e3da0522eeaf9c68b84 | "2021-12-01T22:44:49Z" | c++ | "2022-05-16T13:21:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 32,053 | ["src/AggregateFunctions/AggregateFunctionAvg.h", "src/AggregateFunctions/AggregateFunctionAvgWeighted.cpp", "tests/queries/0_stateless/01668_avg_weighted_ubsan.reference", "tests/queries/0_stateless/01668_avg_weighted_ubsan.sql"] | Crash in avgWeighted with Decimal | **Describe what's wrong**
Crash with avgWeighted + Decimal in window function.
**Does it reproduce on recent release?**
Yes, 21.8-21.12
**How to reproduce**
```
SELECT avgWeighted(a, toDecimal64(c, 9)) OVER (PARTITION BY c)
FROM
(
SELECT
number AS a,
number AS c
FROM numbers(10)
)
[LAPTOP-] 2021.12.01 15:41:35.721898 [ 8782 ] <Fatal> BaseDaemon: ########################################
[LAPTOP-] 2021.12.01 15:41:35.722020 [ 8782 ] <Fatal> BaseDaemon: (version 21.12.1.8922 (official build), build id: 0C2586E3E51E2F10) (from thread 8753) (query_id: 40d256f5-821d-4839-9510-d9477f430139) Received signal Arithmetic exception (8)
[LAPTOP-] 2021.12.01 15:41:35.722056 [ 8782 ] <Fatal> BaseDaemon: Integer divide by zero.
[LAPTOP-] 2021.12.01 15:41:35.722100 [ 8782 ] <Fatal> BaseDaemon: Stack trace: 0xa0783e8 0xa32a307 0xa473879 0x13f5ff6c 0x13f64d9f 0x13d99e23 0x13d8e0cf 0x13d8d589 0x13d8d31b 0x13d9d8e7 0xa0b9817 0xa0bd21d 0x7faa1cc23609 0x7faa1cb4a293
[LAPTOP-] 2021.12.01 15:41:35.722154 [ 8782 ] <Fatal> BaseDaemon: 2. ? @ 0xa0783e8 in /usr/bin/clickhouse
[LAPTOP-] 2021.12.01 15:41:35.722212 [ 8782 ] <Fatal> BaseDaemon: 3. DB::AvgFraction<DB::Decimal<wide::integer<128ul, int> >, DB::Decimal<wide::integer<128ul, int> > >::divideIfAnyDecimal(unsigned int, unsigned int) const @ 0xa32a307 in /usr/bin/clickhouse
[LAPTOP-] 2021.12.01 15:41:35.722246 [ 8782 ] <Fatal> BaseDaemon: 4. DB::AggregateFunctionAvgBase<DB::Decimal<wide::integer<128ul, int> >, DB::Decimal<wide::integer<128ul, int> >, DB::AggregateFunctionAvgWeighted<unsigned long, DB::Decimal<long> > >::insertResultInto(char*, DB::IColumn&, DB::Arena*) const @ 0xa473879 in /usr/bin/clickhouse
[LAPTOP-] 2021.12.01 15:41:35.722286 [ 8782 ] <Fatal> BaseDaemon: 5. DB::WindowTransform::appendChunk(DB::Chunk&) @ 0x13f5ff6c in /usr/bin/clickhouse
[LAPTOP-] 2021.12.01 15:41:35.722318 [ 8782 ] <Fatal> BaseDaemon: 6. DB::WindowTransform::work() @ 0x13f64d9f in /usr/bin/clickhouse
[LAPTOP-] 2021.12.01 15:41:35.722349 [ 8782 ] <Fatal> BaseDaemon: 7. DB::ExecutionThreadContext::executeTask() @ 0x13d99e23 in /usr/bin/clickhouse
[LAPTOP-] 2021.12.01 15:41:35.722381 [ 8782 ] <Fatal> BaseDaemon: 8. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x13d8e0cf in /usr/bin/clickhouse
[LAPTOP-] 2021.12.01 15:41:35.722401 [ 8782 ] <Fatal> BaseDaemon: 9. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x13d8d589 in /usr/bin/clickhouse
[LAPTOP-] 2021.12.01 15:41:35.722433 [ 8782 ] <Fatal> BaseDaemon: 10. DB::PipelineExecutor::execute(unsigned long) @ 0x13d8d31b in /usr/bin/clickhouse
[LAPTOP-] 2021.12.01 15:41:35.722461 [ 8782 ] <Fatal> BaseDaemon: 11. ? @ 0x13d9d8e7 in /usr/bin/clickhouse
[LAPTOP-] 2021.12.01 15:41:35.722497 [ 8782 ] <Fatal> BaseDaemon: 12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa0b9817 in /usr/bin/clickhouse
[LAPTOP-] 2021.12.01 15:41:35.722538 [ 8782 ] <Fatal> BaseDaemon: 13. ? @ 0xa0bd21d in /usr/bin/clickhouse
[LAPTOP-] 2021.12.01 15:41:35.722557 [ 8782 ] <Fatal> BaseDaemon: 14. ? @ 0x7faa1cc23609 in ?
[LAPTOP-] 2021.12.01 15:41:35.722587 [ 8782 ] <Fatal> BaseDaemon: 15. __clone @ 0x7faa1cb4a293 in ?
[LAPTOP-] 2021.12.01 15:41:36.050135 [ 8782 ] <Fatal> BaseDaemon: Calculated checksum of the binary: 88ED9424A7CA5CCF2B8DE36558700120. There is no information about the reference checksum.
SELECT avgWeighted(a, toDecimal64(c,9)) OVER (PARTITION BY toDecimal64(c,9)) FROM ( SELECT number as a, number as c FROM numbers(10)); -- also crashes.
```
**Expected behavior**
Query works
| https://github.com/ClickHouse/ClickHouse/issues/32053 | https://github.com/ClickHouse/ClickHouse/pull/32303 | 523e23cfcd82ae2ddd64538bff562c62a10d9aeb | 6c16348faa59805ebf44b4bdd92675eee5a2ad17 | "2021-12-01T12:46:02Z" | c++ | "2021-12-07T10:32:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,979 | ["src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/02155_parse_date_lowcard_default_throw.reference", "tests/queries/0_stateless/02155_parse_date_lowcard_default_throw.sql"] | Error when parseDateTimeBestEffort is invoked on LowCardinality column from the joined subquery | **Describe what's wrong**
parseDateTimeBestEffort does not work when invoked on a LowCardinality column from the joined subquery
**Does it reproduce on recent release?**
Have not tried
**How to reproduce**
The following SQL query does not work:
```sql
SELECT parseDateTimeBestEffort(q0.date_field) as parsed_date
FROM (SELECT 1 as pk1) t1
inner join (
SELECT 1 as pk1, toLowCardinality('15-JUL-16') as date_field
) q0 on q0.pk1 = t1.pk1;
```
> e.displayText() = DB::ParsingException: Cannot read DateTime: neither Date nor Time was parsed successfully: while executing 'FUNCTION parseDateTimeBestEffort(date_field :: 0) -> parseDateTimeBestEffort(date_field) LowCardinality(DateTime) : 1' (version 21.3.12.2 (official build))
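A possible workaround sketch (an assumption on my part, not verified in the report): converting the LowCardinality value back to a plain String before parsing avoids the failing code path:

```sql
-- Hypothetical workaround: strip LowCardinality with toString() before parsing
SELECT parseDateTimeBestEffort(toString(q0.date_field)) AS parsed_date
FROM (SELECT 1 AS pk1) AS t1
INNER JOIN
(
    SELECT 1 AS pk1, toLowCardinality('15-JUL-16') AS date_field
) AS q0 ON q0.pk1 = t1.pk1;
```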
**Expected behavior**
It should work the same way as the following queries:
```sql
SELECT parseDateTimeBestEffort('12-JUL-16') as parsed_date;
-- |parsed_date|
-- |2016-07-12 00:00:00|
SELECT parseDateTimeBestEffort(toLowCardinality('13-JUL-16')) as parsed_date;
-- |parsed_date|
-- |2016-07-13 00:00:00|
SELECT parseDateTimeBestEffort(date_field) as parsed_date
FROM (
SELECT toInt64(330733) as pk1, toLowCardinality('14-JUL-16') as date_field
);
-- |parsed_date|
-- |2016-07-14 00:00:00|
``` | https://github.com/ClickHouse/ClickHouse/issues/31979 | https://github.com/ClickHouse/ClickHouse/pull/33286 | e879aca58bbccb668eb746df46efa6f530127ab6 | 11f64d6e1f7d903cccf7e697b768ec787b054911 | "2021-11-30T07:26:53Z" | c++ | "2021-12-30T04:19:42Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,963 | ["src/Interpreters/ExternalDictionariesLoader.cpp"] | XML dictionaries + MatView + 21.8.11 = DB::Exception: Dictionary not found | ```
apt-get install clickhouse-server=21.8.10.19 clickhouse-client=21.8.10.19 clickhouse-common-static=21.8.10.19
cat /etc/clickhouse-server/config.d/defdb.xml
<?xml version="1.0"?>
<yandex>
<default_database>test</default_database>
</yandex>
cat /etc/clickhouse-server/node_dictionary.xml
<dictionaries>
<dictionary>
<name>node</name>
<source>
<file>
<path>/etc/clickhouse-server/node.csv</path>
<format>CSV</format>
</file>
</source>
<lifetime>0</lifetime>
<layout><flat /></layout>
<structure>
<id><name>key</name></id>
<attribute>
<name>name</name>
<type>String</type>
<null_value></null_value>
</attribute>
</structure>
</dictionary>
</dictionaries>
cat /etc/clickhouse-server/node.csv
1,test
SELECT dictGet('node', 'name', toUInt64(1))
┌─dictGet('node', 'name', toUInt64(1))─┐
│ test │
└──────────────────────────────────────┘
create database if not exists test;
create table test.test_null(A Int64) Engine=Null;
create table test.test_mt(S String) Engine=MergeTree order by S;
create materialized view test.test_mv to test_mt as select dictGet('node', 'name', toUInt64(A)) S
from test.test_null;
insert into test_null select 1;
select * from test.test_mt;
┌─S────┐
│ test │
└──────┘
```
```
apt-get install clickhouse-server=21.8.11.4 clickhouse-client=21.8.11.4 clickhouse-common-static=21.8.11.4
/etc/init.d/clickhouse-server restart
SELECT dictGet('node', 'name', toUInt64(1));
┌─dictGet('node', 'name', toUInt64(1))─┐
│ test │
└──────────────────────────────────────┘
insert into test_null select 1;
DB::Exception: Dictionary (`test.node`) not found: While processing dictGet('test.node', 'name', toUInt64(A)) AS S.
``` | https://github.com/ClickHouse/ClickHouse/issues/31963 | https://github.com/ClickHouse/ClickHouse/pull/32187 | 4e62d9f5b13d1690825cd9b70a0b687d158eead6 | 11df9a6bc4ddd4b532c228e0d8bd76d3cc68fa0e | "2021-11-29T21:22:02Z" | c++ | "2021-12-06T12:11:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,962 | ["src/Storages/FileLog/StorageFileLog.cpp", "tests/queries/0_stateless/02125_fix_storage_filelog.reference", "tests/queries/0_stateless/02125_fix_storage_filelog.sql", "tests/queries/0_stateless/02126_fix_filelog.reference", "tests/queries/0_stateless/02126_fix_filelog.sh"] | FileLog is able to read files only in /var/lib/clickhouse/user_files/ | ```
touch /tmp/aaa.csv
CREATE TABLE log (A String) ENGINE= FileLog('/tmp/aaa.csv', 'CSV');
DB::Exception: The absolute data path should be inside `user_files_path`(/var/lib/clickhouse/user_files/). (BAD_ARGUMENTS)
CREATE TABLE log (A String) ENGINE= FileLog('/tmp/aaa.csv', 'CSV');
DB::Exception: Metadata files already exist by path: /var/lib/clickhouse/.filelog_storage_metadata/default/log, remove them manually if it is intended. (TABLE_METADATA_ALREADY_EXISTS)
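-- A possible workaround sketch (not part of the original report): keep the data file
-- under user_files_path, which is what the first error message requires, e.g. after
-- `touch /var/lib/clickhouse/user_files/aaa.csv`:
CREATE TABLE log (A String) ENGINE= FileLog('/var/lib/clickhouse/user_files/aaa.csv', 'CSV');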
``` | https://github.com/ClickHouse/ClickHouse/issues/31962 | https://github.com/ClickHouse/ClickHouse/pull/31967 | 78224ef273c9934ea9b3ee64461726d47011b214 | 18e200b1e2bf5568befd14be4c82621596c211e4 | "2021-11-29T20:37:31Z" | c++ | "2021-11-30T12:38:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,924 | ["src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/01921_datatype_date32.reference", "tests/queries/0_stateless/01921_datatype_date32.sql"] | toString(Date32) works correctly only with the limited range of values | The function `toString` doesn't work correctly with an argument of the type `Date32` if the value of the argument is outside of the range [1970-01-01, 2149-06-06].
**How to reproduce**
ClickHouse server version 21.11.3 revision 54450.
ClickHouse client version 21.11.3.6 (official build).
Run the query:
```SQL
SELECT T.d date, toString(T.d) dateStr
FROM
(
SELECT '1925-01-01'::Date32 d
UNION ALL SELECT '1969-12-31'::Date32
UNION ALL SELECT '1970-01-01'::Date32
UNION ALL SELECT '2149-06-06'::Date32
UNION ALL SELECT '2149-06-07'::Date32
UNION ALL SELECT '2283-11-11'::Date32
) AS T
ORDER BY T.d
```
Rows returned by the query:
```
┌───────date─┬─dateStr────┐
│ 1925-01-01 │ 2104-06-07 │
│ 1969-12-31 │ 2149-06-06 │
│ 1970-01-01 │ 1970-01-01 │
│ 2149-06-06 │ 2149-06-06 │
│ 2149-06-07 │ 1970-01-01 │
│ 2283-11-11 │ 2104-06-06 │
└────────────┴────────────┘
```
**Expected behavior**
Values in the columns **date** and **dateStr** should be the same.
For example, `toString('1969-12-31'::Date32)` should return the string "1969-12-31", but it returns "2149-06-06".
| https://github.com/ClickHouse/ClickHouse/issues/31924 | https://github.com/ClickHouse/ClickHouse/pull/37775 | 21271726de02a2d50bebb3316edab71602d977c5 | b34782dc6aa907b44c033b224524dc74dd8becfd | "2021-11-28T07:28:51Z" | c++ | "2022-06-02T13:40:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,843 | ["src/Storages/MergeTree/IMergeTreeDataPart.cpp", "src/Storages/MergeTree/IMergeTreeDataPart.h", "src/Storages/StorageReplicatedMergeTree.cpp"] | The problem of data part merging while using zero copy replication | I use zero copy replication by HDFS for test , replicas create table use the same storage_policy so they will use the same data on HDFS, the directory is shared. But now, I met a problem, the data part merge cannot be done?
I understand that the replicas are expected to share the same directory on HDFS, but why can't the data parts be merged? Is my operation wrong?
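For context, a minimal sketch of the kind of table assumed to be involved (the table name and the partition month come from the log below; the schema, the ZooKeeper path, and the storage policy name are assumptions):

```sql
-- Both replicas register the same ZooKeeper path and use the same HDFS-backed
-- storage policy, so the part data on HDFS is shared between them.
CREATE TABLE default.hits_v1_1116_1
(
    EventDate Date,
    UserID UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/01/hits_v1_1116_1', '{replica}')
PARTITION BY toYYYYMM(EventDate)
ORDER BY UserID
SETTINGS storage_policy = 'hdfs1';
```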
`2021.11.24 20:14:58.443628 [ 48666 ] {} default.hits_v1_1116_1 (2cf38cbb-8471-4d06-acf3-8cbb84711d06): auto DB::StorageReplicatedMergeTree::processQueueEntry(ReplicatedMergeTreeQueue::SelectedEntryPtr)::(anonymous class)::operator()(DB::StorageReplicatedMergeTree::LogEntryPtr &) const: Code: 84. DB::Exception: Directory /home/disk7/zcy/hdfs-ck/data-01/data/disks/hdfs1/store/2cf/2cf38cbb-8471-4d06-acf3-8cbb84711d06/tmp_merge_201403_6_11_1/ already exists. (DIRECTORY_ALREADY_EXISTS), Stack trace (when copying this message, always include the lines below):
Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, int) @ 0x12dc58ec in ?
DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, int, bool) @ 0x9616ada in ?
DB::MergeTreeDataMergerMutator::mergePartsToTemporaryPart(DB::FutureMergedMutatedPart const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::MergeListElement&, std::__1::shared_ptrDB::RWLockImpl::LockHolderImpl&, long, std::__1::shared_ptr<DB::Context const>, std::__1::unique_ptr<DB::IReservation, std::__1::default_deleteDB::IReservation > const&, bool, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > > > const&, DB::MergeTreeData::MergingParams const&, DB::IMergeTreeDataPart const*, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&) @ 0x1099e802 in ?
DB::StorageReplicatedMergeTree::tryExecuteMerge(DB::ReplicatedMergeTreeLogEntry const&) @ 0x107483d8 in ?
DB::StorageReplicatedMergeTree::executeLogEntry(DB::ReplicatedMergeTreeLogEntry&) @ 0x1073cae9 in ?
bool std::__1::__function::__policy_invoker<bool (std::__1::shared_ptrDB::ReplicatedMergeTreeLogEntry&)>::__call_impl<std::__1::__function::__default_alloc_func<DB::StorageReplicatedMergeTree::processQueueEntry(std::__1::shared_ptrDB::ReplicatedMergeTreeQueue::SelectedEntry)::$_14, bool (std::__1::shared_ptrDB::ReplicatedMergeTreeLogEntry&)> >(std::__1::__function::__policy_storage const*, std::__1::shared_ptrDB::ReplicatedMergeTreeLogEntry&) @ 0x107c2bbf in ?
DB::ReplicatedMergeTreeQueue::processEntry(std::__1::function<std::__1::shared_ptrzkutil::ZooKeeper ()>, std::__1::shared_ptrDB::ReplicatedMergeTreeLogEntry&, std::__1::function<bool (std::__1::shared_ptrDB::ReplicatedMergeTreeLogEntry&)>) @ 0x10ab85a9 in ?
DB::StorageReplicatedMergeTree::processQueueEntry(std::__1::shared_ptrDB::ReplicatedMergeTreeQueue::SelectedEntry) @ 0x10773ebd in ?
void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::IBackgroundJobExecutor::execute(DB::JobAndPool)::$_0, void ()> >(std::__1::__function::__policy_storage const*) @ 0x108ddcf6 in ?
ThreadPoolImpl::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x964c28b in ?
ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl::scheduleImpl(std::__1::function<void ()>, int, std::__1::optional)::'lambda0'()>(void&&, void ThreadPoolImpl::scheduleImpl(std::__1::function<void ()>, int, std::__1::optional)::'lambda0'()&&...)::'lambda'()::operator()() @ 0x964d8ff in ?
ThreadPoolImplstd::__1::thread::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x964a9ab in ?
void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_deletestd::__1::__thread_struct >, void ThreadPoolImplstd::__1::thread::scheduleImpl(std::__1::function<void ()>, int, std::__1::optional)::'lambda0'()> >(void*) @ 0x964cad3 in ?
start_thread @ 0x318b207851 in ?
__clone @ 0x318aee767d in ?
(version 21.10.2.1)` | https://github.com/ClickHouse/ClickHouse/issues/31843 | https://github.com/ClickHouse/ClickHouse/pull/32201 | 9867d75fecb48e82605fb8eeeea095ebd83062ce | 25427719d40e521846187e68294f9141ed037327 | "2021-11-26T03:01:21Z" | c++ | "2021-12-10T13:29:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,819 | ["src/Functions/EmptyImpl.h", "tests/queries/0_stateless/02124_empty_uuid.reference", "tests/queries/0_stateless/02124_empty_uuid.sql"] | incorrect results for empty() of UUID | ```sql
CREATE TABLE some_table
(
`date` DateTime,
`banner_id` UUID
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(date)
ORDER BY date
---- ALL ROWS ARE 0 = ALL EMPTY
insert into some_table select today()+rand()%4, toUUID('00000000-0000-0000-0000-000000000000')
from numbers(100);
SELECT *
FROM
(
SELECT
banner_id,
empty(banner_id) AS empt
FROM some_table
LIMIT 10000000 -- to prevent pred. pushdown
)
WHERE (empt = 0) AND (banner_id = toUUID('00000000-0000-0000-0000-000000000000'));
0 rows in set. Elapsed: 0.001 sec. ---- expected result. All OK.
```
```sql
-- 1/13 of rows are empty
insert into some_table select today()+rand()%4,
if(rand(1)%13=0, toUUID('00000000-0000-0000-0000-000000000000'), generateUUIDv4())
from numbers(100);
SELECT *
FROM
(
SELECT
banner_id,
empty(banner_id) AS empt
FROM some_table
LIMIT 10000000 -- to prevent pred. pushdown
)
WHERE (empt = 0) AND (banner_id = toUUID('00000000-0000-0000-0000-000000000000'))
Query id: 533a2476-b4d6-4fcf-b13f-568cf6acba2f
┌─banner_id────────────────────────────┬─empt─┐
│ 00000000-0000-0000-0000-000000000000 │ 0 │
│ 00000000-0000-0000-0000-000000000000 │ 0 │
│ 00000000-0000-0000-0000-000000000000 │ 0 │
│ 00000000-0000-0000-0000-000000000000 │ 0 │
│ 00000000-0000-0000-0000-000000000000 │ 0 │
│ 00000000-0000-0000-0000-000000000000 │ 0 │
└──────────────────────────────────────┴──────┘
```
expected result
```
0 rows in set. Elapsed: 0.001 sec.
```
The empt column should be = 1, because:
```
SELECT empty(toUUID('00000000-0000-0000-0000-000000000000'))
┌─empty(toUUID('00000000-0000-0000-0000-000000000000'))─┐
│ 1 │
└───────────────────────────────────────────────────────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/31819 | https://github.com/ClickHouse/ClickHouse/pull/31883 | 83391977f88ad74b025bde24d9f576a9530c79f5 | 75ac0f72bc3c94725ad69380d8f64eddfaba14a1 | "2021-11-25T16:17:16Z" | c++ | "2021-11-26T21:23:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,687 | ["src/Interpreters/OptimizeIfWithConstantConditionVisitor.cpp", "tests/queries/0_stateless/02125_constant_if_condition_and_not_existing_column.reference", "tests/queries/0_stateless/02125_constant_if_condition_and_not_existing_column.sql"] | Missing unused column in "if" expression when upgrading to 21.11.4.14 | 21.11.4.14
```
create table test (x String) Engine=StripeLog
select if(toUInt8(0), y, 42) from test
```
Throws exception `DB::Exception: Missing columns: 'y'`
Expecting 42
btw `select if(0, y, 42) from test` works fine
| https://github.com/ClickHouse/ClickHouse/issues/31687 | https://github.com/ClickHouse/ClickHouse/pull/31866 | e943be340a6932578cd8647cde766185a4698037 | bab8ea144bccd386dd044fb54892c36d404c6839 | "2021-11-24T10:12:33Z" | c++ | "2021-11-30T02:54:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,686 | ["src/Processors/Formats/IRowInputFormat.cpp", "tests/queries/0_stateless/00418_input_format_allow_errors.sh"] | input_format_allow_errors_num don't allow to skip bad IPv4 | ```
echo '::' | clickhouse-local --structure 'i IPv4' --query='SELECT * FROM table' --input_format_allow_errors_num=1
``` | https://github.com/ClickHouse/ClickHouse/issues/31686 | https://github.com/ClickHouse/ClickHouse/pull/31697 | 072b4a3ba674858fcab7fbf56508b9b8d193818c | fe7f21acf91228ff60e9eddb7f662bec6b8fecbe | "2021-11-24T10:01:30Z" | c++ | "2021-11-25T08:31:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,680 | ["src/Storages/JoinSettings.cpp", "src/Storages/JoinSettings.h", "src/Storages/StorageJoin.cpp", "src/Storages/StorageJoin.h", "tests/queries/0_stateless/02127_storage_join_settings_with_persistency.reference", "tests/queries/0_stateless/02127_storage_join_settings_with_persistency.sql"] | Persistent setting of Join table | The documentation for the Join table engine lists settings that can be specified at creation time, in particular join_any_take_last_row and persistent; when both are used at the same time, creating the table fails with an error.
The error: Code: 115. DB::Exception: Unknown setting join_any_take_last_row: for storage Join. (UNKNOWN_SETTING) (version 21.11.4.14 (official build))
>create table internal.lookup (key UInt64, value UInt64) engine = Join(any, left, key)
settings join_any_take_last_row = 1, persistent = 0;
The parameter values and their order do not matter.
However, if only one of these two settings is specified (either of them), everything works.
Combinations with other settings work until persistent is added:
>create table internal.lookup (key UInt64, value UInt64) engine = Join(any, left, key)
settings
join_use_nulls = 0,
max_rows_in_join = 0,
max_bytes_in_join = 0,
join_overflow_mode = 'throw',
join_any_take_last_row = 0;
As soon as persistent is added, it starts complaining about an Unknown setting for whichever of the listed parameters is present. | https://github.com/ClickHouse/ClickHouse/issues/31680 | https://github.com/ClickHouse/ClickHouse/pull/32066 | 3d047747ed23f32a2536050f6917844261f57d35 | 5d7dfc6eb9a2b7af3e354be68d072a2c2849e1bb | "2021-11-24T08:23:17Z" | c++ | "2021-12-03T09:06:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,639 | ["CMakeLists.txt", "cmake/ld.lld.in", "cmake/split_debug_symbols.cmake", "cmake/tools.cmake", "src/Common/Elf.cpp", "src/Common/SymbolIndex.cpp", "tests/queries/0_stateless/02161_addressToLineWithInlines.sql", "tests/queries/0_stateless/02420_stracktrace_debug_symbols.reference", "tests/queries/0_stateless/02420_stracktrace_debug_symbols.sh"] | addressToLine misbehaves in master | Note filenames in the backtrace
```
tail -10 clickhouse-server.err.log
5. DB::TCPHandler::receiveHello() @ 0x131f2503 in /usr/lib/debug/.build-id/03/836aa56cfad660466eb42a7973e7053e525ccb.debug
6. DB::TCPHandler::runImpl() @ 0x131ebd82 in /usr/lib/debug/.build-id/03/836aa56cfad660466eb42a7973e7053e525ccb.debug
7. DB::TCPHandler::run() @ 0x13200499 in /usr/lib/debug/.build-id/03/836aa56cfad660466eb42a7973e7053e525ccb.debug
8. Poco::Net::TCPServerConnection::start() @ 0x15e8358f in /usr/lib/debug/.build-id/03/836aa56cfad660466eb42a7973e7053e525ccb.debug
9. Poco::Net::TCPServerDispatcher::run() @ 0x15e85981 in /usr/lib/debug/.build-id/03/836aa56cfad660466eb42a7973e7053e525ccb.debug
10. Poco::PooledThread::run() @ 0x15f9c1c9 in /usr/lib/debug/.build-id/03/836aa56cfad660466eb42a7973e7053e525ccb.debug
11. Poco::ThreadImpl::runnableEntry(void*) @ 0x15f99900 in /usr/lib/debug/.build-id/03/836aa56cfad660466eb42a7973e7053e525ccb.debug
12. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
13. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 21.12.1.8761 (official build))
```
Playing with 00974_query_profile.sql I see the same
```
select addressToLine(toUInt64(293109982));
SELECT addressToLine(toUInt64(293109982))
Query id: 7d7353c9-f4f7-4f81-8d96-3e9e2b6defc6
┌─addressToLine(toUInt64(293109982))───────────────────────────────────────┐
│ /usr/lib/debug/.build-id/03/836aa56cfad660466eb42a7973e7053e525ccb.debug │
└──────────────────────────────────────────────────────────────────────────┘
```
Unfortunately, the test does not treat this as a failure, because the condition `symbol LIKE '%Source%'` is true for some records.
It might be that the issue is caused by something strange on my box.
Please let me know whether the issue is reproducible (or not).
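If useful, here is a possible sanity-check sketch — purely illustrative, and the exact condition a test should use is of course a judgment call: count resolved lines that still point at the debug-info file instead of a source location.

```sql
-- hypothetical check: non-zero means addressToLine returns the .debug path instead of source locations
SET allow_introspection_functions = 1;
SELECT count() AS bad_lines
FROM system.trace_log
ARRAY JOIN trace AS addr
WHERE addressToLine(addr) LIKE '%.debug';
```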
| https://github.com/ClickHouse/ClickHouse/issues/31639 | https://github.com/ClickHouse/ClickHouse/pull/40873 | 365438d6172cb643603d59a81c12eb3f10d4c5e6 | 499e479892b68414f087a19759fe3600508e3bb3 | "2021-11-22T16:19:03Z" | c++ | "2022-09-07T15:31:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,631 | ["src/AggregateFunctions/AggregateFunctionCategoricalInformationValue.cpp", "tests/queries/0_stateless/02427_msan_group_array_resample.reference", "tests/queries/0_stateless/02427_msan_group_array_resample.sql"] | SUMMARY: MemorySanitizer: use-of-uninitialized-value | (you don't have to strictly follow this form)
**Describe the bug**
https://s3.amazonaws.com/clickhouse-test-reports/31476/c034491da95909fa5a6b1a9a4f72a609fcf30d7c/fuzzer_astfuzzermsan,actions//report.html
Assertion of memory sanitizer:
```
SUMMARY: MemorySanitizer: use-of-uninitialized-value obj-x86_64-linux-gnu/../src/Common/PODArray.h:132:13 in DB::PODArrayBase<8ul, 32ul, DB::MixedArenaAllocator<4096ul, Allocator, DB::AlignedArenaAllocator<8ul>, 8ul>, 0ul, 0ul>::dealloc() Received signal -3 Received signal Unknown signal (-3) Received signal 6 Received signal Aborted (6)
```
**How to reproduce**
```sql
SELECT arrayMap(x -> finalizeAggregation(x), state) FROM (SELECT groupArrayResample(9223372036854775806, 1048575, 65537)(number, number % 3), groupArrayStateResample(10, 2147483648, 65535)(number, number % 9223372036854775806) AS state FROM numbers(100))
```
| https://github.com/ClickHouse/ClickHouse/issues/31631 | https://github.com/ClickHouse/ClickHouse/pull/41460 | df52df83f9195d6d2c26b00fa04fd904b0c8f1a6 | 036e8820971c8562dc2084af1bd378709e00333f | "2021-11-22T12:56:05Z" | c++ | "2022-09-19T03:28:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,606 | ["src/Interpreters/InterpreterSelectQuery.cpp", "tests/queries/0_stateless/02131_multiply_row_policies_on_same_column.reference", "tests/queries/0_stateless/02131_multiply_row_policies_on_same_column.sql"] | row policies multiple OR to IN rewrite bug. | **Describe what's wrong**
If you have more than two row policies that use the same column in their conditions, ClickHouse tries to rewrite them into an IN clause, but it doesn't work.
**Does it reproduce on recent release?**
Yes.
ClickHouse 21.11, 21.8
**How to reproduce**
```
CREATE TABLE test_row
(
`key` UInt32,
`value` UInt32
)
ENGINE = MergeTree
ORDER BY key;
INSERT INTO test_row SELECT
number,
number
FROM numbers(10);
CREATE ROW POLICY IF NOT EXISTS key_1 ON test_row FOR SELECT USING key =1 TO default;
CREATE ROW POLICY IF NOT EXISTS key_2 ON test_row FOR SELECT USING key =2 TO default;
SELECT * FROM test_row;
SELECT *
FROM test_row
Query id: c282f548-cf06-45b6-a187-b0ee37062514
┌─key─┬─value─┐
│ 1 │ 1 │
│ 2 │ 2 │
└─────┴───────┘
2 rows in set. Elapsed: 0.005 sec.
CREATE ROW POLICY IF NOT EXISTS key_3 ON test_row FOR SELECT USING key =3 TO default;
SELECT *
FROM test_row
0 rows in set. Elapsed: 0.017 sec.
Received exception from server (version 21.12.1):
Code: 35. DB::Exception: Received from localhost:9000. DB::Exception: Number of arguments for function "or" should be at least 2: passed 1: While processing or(key IN (1, 2, 3)), key, value. (TOO_FEW_ARGUMENTS_FOR_FUNCTION)
SET optimize_min_equality_disjunction_chain_length = 10;
SELECT *
FROM test_row
WHERE 1
Query id: 74325911-824d-4121-80de-cb5d9819cf77
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 21.12.1):
Code: 35. DB::Exception: Received from localhost:9000. DB::Exception: Number of arguments for function "or" should be at least 2: passed 1: While processing or(key IN (1, 2, 3)), key, value. (TOO_FEW_ARGUMENTS_FOR_FUNCTION)
```
**Expected behavior**
Query works.
**Additional context**
It looks like the OR-chain-to-IN rewrite doesn't work correctly for row policies.
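A possible workaround sketch until this is fixed (the combined policy name is an assumption): express the filter as a single policy that already uses IN, so the server has no OR chain to rewrite at query time.

```sql
-- hypothetical workaround: one policy with IN instead of several policies that get OR-ed together
DROP ROW POLICY IF EXISTS key_1, key_2, key_3 ON test_row;
CREATE ROW POLICY IF NOT EXISTS keys_1_2_3 ON test_row FOR SELECT USING key IN (1, 2, 3) TO default;
SELECT * FROM test_row; -- expected: the rows with key 1, 2 and 3
```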
| https://github.com/ClickHouse/ClickHouse/issues/31606 | https://github.com/ClickHouse/ClickHouse/pull/32291 | 3fafe641d0dae6c9a5c6094a76a37b7eb47e30a2 | d112b30d7818e89d0e0b9e47f8da957fb7e04f1a | "2021-11-21T16:42:41Z" | c++ | "2021-12-15T22:52:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,538 | ["src/Interpreters/MutationsInterpreter.cpp", "tests/queries/0_stateless/02008_materialize_column.sql"] | Crash on materializing column | **Describe what's wrong**
Crash on executing `alter table tbl_name materialize column col_name`. The column is `col_name UInt8`.
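Since no DDL was shared, the following is only a guess at the shape of a minimal reproduction — the table and column names are the placeholders above, and it is not guaranteed that this exact script triggers the crash.

```sql
-- hypothetical minimal script around the reported statement
CREATE TABLE tbl_name (key UInt32, col_name UInt8 DEFAULT 0) ENGINE = MergeTree ORDER BY key;
INSERT INTO tbl_name (key) VALUES (1);
ALTER TABLE tbl_name MATERIALIZE COLUMN col_name; -- the statement reported to crash 21.11.3.6
```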
**Does it reproduce on recent release?**
Reproducible from official Docker image version 21.11.3.6
**How to reproduce**
* 21.11.3.6
* Using CLI client
**Expected behavior**
Column is materialized
**Error message and/or stacktrace**
```
[e27227f83079] 2021.11.19 14:03:57.811793 [ 403 ] <Fatal> BaseDaemon: ########################################
[e27227f83079] 2021.11.19 14:03:57.811982 [ 403 ] <Fatal> BaseDaemon: (version 21.11.3.6 (official build), build id: 18F71364524E9B66F4365E590A43D87EF75AD9BA) (from thread 102) (query_id: 406c69fb-b639-4080-980c-0f6f69d4c1a2) Received signal Segmentation fault (11)
[e27227f83079] 2021.11.19 14:03:57.811997 [ 403 ] <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
[e27227f83079] 2021.11.19 14:03:57.812012 [ 403 ] <Fatal> BaseDaemon: Stack trace: 0x12569d28 0x12566a4d 0x1203c121 0x1203a869 0x1273d189 0x1273b113 0x130ca9f0 0x130de499 0x15d193cf 0x15d1b7c1 0x15e30169 0x15e2d8a0 0x7f862cabc609 0x7f862c9b6293
[e27227f83079] 2021.11.19 14:03:57.812217 [ 403 ] <Fatal> BaseDaemon: 2. DB::MutationsInterpreter::prepare(bool) @ 0x12569d28 in /usr/bin/clickhouse
[e27227f83079] 2021.11.19 14:03:57.812242 [ 403 ] <Fatal> BaseDaemon: 3. DB::MutationsInterpreter::MutationsInterpreter(std::__1::shared_ptr<DB::IStorage>, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::MutationCommands, std::__1::shared_ptr<DB::Context const>, bool) @ 0x12566a4d in /usr/bin/clickhouse
[e27227f83079] 2021.11.19 14:03:57.812273 [ 403 ] <Fatal> BaseDaemon: 4. DB::InterpreterAlterQuery::executeToTable(DB::ASTAlterQuery const&) @ 0x1203c121 in /usr/bin/clickhouse
[e27227f83079] 2021.11.19 14:03:57.812287 [ 403 ] <Fatal> BaseDaemon: 5. DB::InterpreterAlterQuery::execute() @ 0x1203a869 in /usr/bin/clickhouse
[e27227f83079] 2021.11.19 14:03:57.812302 [ 403 ] <Fatal> BaseDaemon: 6. ? @ 0x1273d189 in /usr/bin/clickhouse
[e27227f83079] 2021.11.19 14:03:57.812316 [ 403 ] <Fatal> BaseDaemon: 7. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x1273b113 in /usr/bin/clickhouse
[e27227f83079] 2021.11.19 14:03:57.812338 [ 403 ] <Fatal> BaseDaemon: 8. DB::TCPHandler::runImpl() @ 0x130ca9f0 in /usr/bin/clickhouse
[e27227f83079] 2021.11.19 14:03:57.812349 [ 403 ] <Fatal> BaseDaemon: 9. DB::TCPHandler::run() @ 0x130de499 in /usr/bin/clickhouse
[e27227f83079] 2021.11.19 14:03:57.812366 [ 403 ] <Fatal> BaseDaemon: 10. Poco::Net::TCPServerConnection::start() @ 0x15d193cf in /usr/bin/clickhouse
[e27227f83079] 2021.11.19 14:03:57.812377 [ 403 ] <Fatal> BaseDaemon: 11. Poco::Net::TCPServerDispatcher::run() @ 0x15d1b7c1 in /usr/bin/clickhouse
[e27227f83079] 2021.11.19 14:03:57.812388 [ 403 ] <Fatal> BaseDaemon: 12. Poco::PooledThread::run() @ 0x15e30169 in /usr/bin/clickhouse
[e27227f83079] 2021.11.19 14:03:57.812400 [ 403 ] <Fatal> BaseDaemon: 13. Poco::ThreadImpl::runnableEntry(void*) @ 0x15e2d8a0 in /usr/bin/clickhouse
[e27227f83079] 2021.11.19 14:03:57.812411 [ 403 ] <Fatal> BaseDaemon: 14. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
[e27227f83079] 2021.11.19 14:03:57.812425 [ 403 ] <Fatal> BaseDaemon: 15. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
[e27227f83079] 2021.11.19 14:03:57.934476 [ 403 ] <Fatal> BaseDaemon: Checksum of the binary: 9911BB0CC38DE5BCDEC7F55FCCAACC88, integrity check passed.
```
| https://github.com/ClickHouse/ClickHouse/issues/31538 | https://github.com/ClickHouse/ClickHouse/pull/32464 | 6cfb1177325fea43f043aa5f60aebf20bba20082 | 9867d75fecb48e82605fb8eeeea095ebd83062ce | "2021-11-19T14:12:26Z" | c++ | "2021-12-10T13:27:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,524 | ["src/Columns/ColumnDecimal.cpp", "src/Columns/ColumnFixedString.cpp", "src/Columns/ColumnVector.cpp", "src/Columns/ColumnsCommon.cpp", "src/Columns/ColumnsCommon.h"] | Failed to run clickhouse-tests on custom build ClickHouse Power9 platform | **Describe the unexpected behaviour**
After successfully building the ClickHouse binary on the Power9 platform, I ran `clickhouse-tests`, and the ClickHouse server hit a segmentation fault.
**How to reproduce**
v21.11.4.14-stable
Run `clickhouse-tests`.
Several problems occurred when running `clickhouse-tests`:
1. There's no output on query `SELECT value FROM system.merge_tree_settings WHERE name = 'min_bytes_for_wide_part' `:
```
root@tserver-overlayregen:/clickhouse/ClickHouse/tests# ./clickhouse-test -b /clickhouse/ClickHouse/build/programs/clickhouse
Using queries from 'queries' directory
Using /clickhouse/ClickHouse/build/programs/clickhouse-client as client program (expecting split build)
Connecting to ClickHouse server... OK
Traceback (most recent call last):
File "./clickhouse-test", line 1442, in <module>
main(args)
File "./clickhouse-test", line 1140, in main
args.build_flags = collect_build_flags(args)
File "./clickhouse-test", line 1032, in collect_build_flags
value = int(clickhouse_execute(args, "SELECT value FROM system.merge_tree_settings WHERE name = 'min_bytes_for_wide_part'"))
ValueError: invalid literal for int() with base 10: b''
```
```
root@tserver-overlayregen:/clickhouse/ClickHouse/build# programs/clickhouse client
ClickHouse client version 21.11.4.14.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.11.4 revision 54450.
tserver-overlayregen :) Cannot load data for command line suggestions: Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid number of rows in Chunk column String position 0: expected 83, got 20. (LOGICAL_ERROR) (version 21.11.4.14)
tserver-overlayregen :) SELECT value FROM system.merge_tree_settings WHERE name = 'min_bytes_for_wide_part';
SELECT value
FROM system.merge_tree_settings
WHERE name = 'min_bytes_for_wide_part'
Query id: 9653c397-cbd4-499f-a3e4-85ee6808f044
Ok.
0 rows in set. Elapsed: 0.006 sec.
```
2. After modifying `clickhouse-tests` to get past that query, it successfully ran the next tests, and then the ClickHouse server hit a segmentation fault:
```
2021.11.19 14:49:57.892883 [ 318580 ] {8328680e-22de-4e88-bc29-02399199c6ec} <Error> executeQuery: Code: 70. DB::Exception: Conversion from UInt64 to AggregateFunction(count) is not supported: while converting source column `count()` to destination column `count()`. (CANNOT_CONVERT_TYPE) (version 21.11.4.14) (from 127.0.0.1:52288) (comment: 02096_rename_atomic_hang.sql) (in query: select count() from db_hang.test_mv;), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x22f0e7d0 in ?
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x1795b5cc in /clickhouse/ClickHouse/build/programs/clickhouse
2. DB::FunctionCast<DB::CastInternalName>::createAggregateFunctionWrapper(std::__1::shared_ptr<DB::IDataType const> const&, DB::DataTypeAggregateFunction const*) const @ 0x1a71b524 in /clickhouse/ClickHouse/build/programs/clickhouse
3. DB::FunctionCast<DB::CastInternalName>::prepareImpl(std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&, bool) const @ 0x1a717a7c in /clickhouse/ClickHouse/build/programs/clickhouse
4. DB::FunctionCast<DB::CastInternalName>::prepareRemoveNullable(std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&, bool) const @ 0x1a716a68 in /clickhouse/ClickHouse/build/programs/clickhouse
5. DB::FunctionCast<DB::CastInternalName>::prepareUnpackDictionaries(std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&) const @ 0x1a714b74 in /clickhouse/ClickHouse/build/programs/clickhouse
6. DB::FunctionCast<DB::CastInternalName>::prepare(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x1a714288 in /clickhouse/ClickHouse/build/programs/clickhouse
7. DB::ActionsDAG::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x20dccf98 in ?
8. DB::ActionsDAG::makeConvertingActions(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, DB::ActionsDAG::MatchColumnsMode, bool, bool, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >*) @ 0x20dd8d20 in ?
9. DB::StorageMaterializedView::read(DB::QueryPlan&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0x219b8de0 in ?
10. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&) @ 0x212b6b44 in ?
11. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::optional<DB::Pipe>) @ 0x212b06d8 in ?
12. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x212b00b0 in ?
13. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x21467f08 in ?
14. DB::InterpreterSelectWithUnionQuery::execute() @ 0x21468c00 in ?
15. DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x21681d5c in ?
16. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x216801a4 in ?
17. DB::TCPHandler::runImpl() @ 0x2201f744 in ?
18. DB::TCPHandler::run() @ 0x22031678 in ?
19. Poco::Net::TCPServerConnection::start() @ 0x22e46c2c in ?
20. Poco::Net::TCPServerDispatcher::run() @ 0x22e474f4 in ?
21. Poco::PooledThread::run() @ 0x22f980f4 in ?
22. Poco::(anonymous namespace)::RunnableHolder::run() @ 0x22f960ac in ?
23. Poco::ThreadImpl::runnableEntry(void*) @ 0x22f94870 in ?
24. start_thread @ 0x9818 in /usr/lib/powerpc64le-linux-gnu/libpthread-2.31.so
2021.11.19 14:49:57.893197 [ 318580 ] {8328680e-22de-4e88-bc29-02399199c6ec} <Error> TCPHandler: Code: 70. DB::Exception: Conversion from UInt64 to AggregateFunction(count) is not supported: while converting source column `count()` to destination column `count()`. (CANNOT_CONVERT_TYPE), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x22f0e7d0 in ?
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x1795b5cc in /clickhouse/ClickHouse/build/programs/clickhouse
2. DB::FunctionCast<DB::CastInternalName>::createAggregateFunctionWrapper(std::__1::shared_ptr<DB::IDataType const> const&, DB::DataTypeAggregateFunction const*) const @ 0x1a71b524 in /clickhouse/ClickHouse/build/programs/clickhouse
3. DB::FunctionCast<DB::CastInternalName>::prepareImpl(std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&, bool) const @ 0x1a717a7c in /clickhouse/ClickHouse/build/programs/clickhouse
4. DB::FunctionCast<DB::CastInternalName>::prepareRemoveNullable(std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&, bool) const @ 0x1a716a68 in /clickhouse/ClickHouse/build/programs/clickhouse
5. DB::FunctionCast<DB::CastInternalName>::prepareUnpackDictionaries(std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&) const @ 0x1a714b74 in /clickhouse/ClickHouse/build/programs/clickhouse
6. DB::FunctionCast<DB::CastInternalName>::prepare(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x1a714288 in /clickhouse/ClickHouse/build/programs/clickhouse
7. DB::ActionsDAG::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x20dccf98 in ?
8. DB::ActionsDAG::makeConvertingActions(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, DB::ActionsDAG::MatchColumnsMode, bool, bool, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >*) @ 0x20dd8d20 in ?
9. DB::StorageMaterializedView::read(DB::QueryPlan&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0x219b8de0 in ?
10. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&) @ 0x212b6b44 in ?
11. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::optional<DB::Pipe>) @ 0x212b06d8 in ?
12. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x212b00b0 in ?
13. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x21467f08 in ?
14. DB::InterpreterSelectWithUnionQuery::execute() @ 0x21468c00 in ?
15. DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x21681d5c in ?
16. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x216801a4 in ?
17. DB::TCPHandler::runImpl() @ 0x2201f744 in ?
18. DB::TCPHandler::run() @ 0x22031678 in ?
19. Poco::Net::TCPServerConnection::start() @ 0x22e46c2c in ?
20. Poco::Net::TCPServerDispatcher::run() @ 0x22e474f4 in ?
21. Poco::PooledThread::run() @ 0x22f980f4 in ?
22. Poco::(anonymous namespace)::RunnableHolder::run() @ 0x22f960ac in ?
23. Poco::ThreadImpl::runnableEntry(void*) @ 0x22f94870 in ?
24. start_thread @ 0x9818 in /usr/lib/powerpc64le-linux-gnu/libpthread-2.31.so
2021.11.19 14:49:57.893334 [ 318580 ] {8328680e-22de-4e88-bc29-02399199c6ec} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2021.11.19 14:49:57.893358 [ 318580 ] {} <Debug> TCPHandler: Processed in 0.002127228 sec.
2021.11.19 14:49:57.893636 [ 318580 ] {} <Debug> TCPHandler: Done processing connection.
2021.11.19 14:49:57.893671 [ 318580 ] {} <Debug> TCP-Session: aef81ccf-fd9b-41a2-aef8-1ccffd9bd1a2 Destroying unnamed session of user 94309d50-4f52-5250-31bd-74fecac179db
2021.11.19 14:49:57.899873 [ 318579 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: POST, Address: 127.0.0.1:41212, User-Agent: (none), Length: 0, Content Type: , Transfer Encoding: identity, X-Forwarded-For: (none)
2021.11.19 14:49:57.899949 [ 318579 ] {} <Trace> DynamicQueryHandler: Request URI: /?query=DROP+DATABASE+test_vrfyu7&database=system&connect_timeout=599&receive_timeout=599&send_timeout=599&http_connection_timeout=599&http_receive_timeout=599&http_send_timeout=599&log_comment=02096_rename_atomic_hang.sql
2021.11.19 14:49:57.899997 [ 318579 ] {} <Debug> HTTP-Session: 13945d52-0212-415a-9394-5d520212e15a Authenticating user 'default' from 127.0.0.1:41212
2021.11.19 14:49:57.900032 [ 318579 ] {} <Debug> HTTP-Session: 13945d52-0212-415a-9394-5d520212e15a Authenticated with global context as user 94309d50-4f52-5250-31bd-74fecac179db
2021.11.19 14:49:57.900067 [ 318579 ] {} <Debug> HTTP-Session: 13945d52-0212-415a-9394-5d520212e15a Creating query context from global context, user_id: 94309d50-4f52-5250-31bd-74fecac179db, parent context user: <NOT SET>
2021.11.19 14:49:57.900364 [ 318579 ] {3b54260b-e5ef-4cd7-a8ec-5574a482ac57} <Debug> executeQuery: (from 127.0.0.1:41212) (comment: 02096_rename_atomic_hang.sql) DROP DATABASE test_vrfyu7
2021.11.19 14:49:57.900418 [ 318579 ] {3b54260b-e5ef-4cd7-a8ec-5574a482ac57} <Trace> ContextAccess (default): Access granted: DROP DATABASE ON test_vrfyu7.*
2021.11.19 14:49:57.900813 [ 318579 ] {3b54260b-e5ef-4cd7-a8ec-5574a482ac57} <Debug> DynamicQueryHandler: Done processing query
2021.11.19 14:49:57.900870 [ 318579 ] {3b54260b-e5ef-4cd7-a8ec-5574a482ac57} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2021.11.19 14:49:57.900916 [ 318579 ] {} <Debug> HTTP-Session: 13945d52-0212-415a-9394-5d520212e15a Destroying unnamed session of user 94309d50-4f52-5250-31bd-74fecac179db
2021.11.19 14:49:57.906555 [ 318579 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: POST, Address: 127.0.0.1:41214, User-Agent: (none), Length: 0, Content Type: , Transfer Encoding: identity, X-Forwarded-For: (none)
2021.11.19 14:49:57.906632 [ 318579 ] {} <Trace> DynamicQueryHandler: Request URI: /?query=CREATE+DATABASE+test_hjqesr&database=system&connect_timeout=30&receive_timeout=30&send_timeout=30&http_connection_timeout=30&http_receive_timeout=30&http_send_timeout=30&log_comment=02096_totals_global_in_bug.sql
2021.11.19 14:49:57.906677 [ 318579 ] {} <Debug> HTTP-Session: fa1dab93-e23f-4869-ba1d-ab93e23fb869 Authenticating user 'default' from 127.0.0.1:41214
2021.11.19 14:49:57.906712 [ 318579 ] {} <Debug> HTTP-Session: fa1dab93-e23f-4869-ba1d-ab93e23fb869 Authenticated with global context as user 94309d50-4f52-5250-31bd-74fecac179db
2021.11.19 14:49:57.906752 [ 318579 ] {} <Debug> HTTP-Session: fa1dab93-e23f-4869-ba1d-ab93e23fb869 Creating query context from global context, user_id: 94309d50-4f52-5250-31bd-74fecac179db, parent context user: <NOT SET>
2021.11.19 14:49:57.907012 [ 318579 ] {120b8211-eb3f-4079-9b66-c04f476b3776} <Debug> executeQuery: (from 127.0.0.1:41214) (comment: 02096_totals_global_in_bug.sql) CREATE DATABASE test_hjqesr
2021.11.19 14:49:57.907062 [ 318579 ] {120b8211-eb3f-4079-9b66-c04f476b3776} <Trace> ContextAccess (default): Access granted: CREATE DATABASE ON test_hjqesr.*
2021.11.19 14:49:57.911481 [ 318579 ] {120b8211-eb3f-4079-9b66-c04f476b3776} <Information> DatabaseAtomic (test_hjqesr): Metadata processed, database test_hjqesr has 0 tables and 0 dictionaries in total.
2021.11.19 14:49:57.911514 [ 318579 ] {120b8211-eb3f-4079-9b66-c04f476b3776} <Information> TablesLoader: Parsed metadata of 0 tables in 1 databases in 6.2704e-05 sec
2021.11.19 14:49:57.911537 [ 318579 ] {120b8211-eb3f-4079-9b66-c04f476b3776} <Information> TablesLoader: Loading 0 tables with 0 dependency level
2021.11.19 14:49:57.911559 [ 318579 ] {120b8211-eb3f-4079-9b66-c04f476b3776} <Information> DatabaseAtomic (test_hjqesr): Starting up tables.
2021.11.19 14:49:57.911696 [ 318579 ] {120b8211-eb3f-4079-9b66-c04f476b3776} <Debug> DynamicQueryHandler: Done processing query
2021.11.19 14:49:57.911740 [ 318579 ] {120b8211-eb3f-4079-9b66-c04f476b3776} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2021.11.19 14:49:57.911777 [ 318579 ] {} <Debug> HTTP-Session: fa1dab93-e23f-4869-ba1d-ab93e23fb869 Destroying unnamed session of user 94309d50-4f52-5250-31bd-74fecac179db
2021.11.19 14:49:57.941043 [ 318580 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: 127.0.0.1:52294
2021.11.19 14:49:57.941245 [ 318580 ] {} <Debug> TCPHandler: Connected ClickHouse client version 21.11.0, revision: 54450, database: test_hjqesr, user: default.
2021.11.19 14:49:57.941295 [ 318580 ] {} <Debug> TCP-Session: 74113ef2-cdb4-4bbd-b411-3ef2cdb4bbbd Authenticating user 'default' from 127.0.0.1:52294
2021.11.19 14:49:57.941336 [ 318580 ] {} <Debug> TCP-Session: 74113ef2-cdb4-4bbd-b411-3ef2cdb4bbbd Authenticated with global context as user 94309d50-4f52-5250-31bd-74fecac179db
2021.11.19 14:49:57.941392 [ 318580 ] {} <Debug> TCP-Session: 74113ef2-cdb4-4bbd-b411-3ef2cdb4bbbd Creating session context with user_id: 94309d50-4f52-5250-31bd-74fecac179db
2021.11.19 14:49:57.941526 [ 318580 ] {} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=false
2021.11.19 14:49:57.941567 [ 318580 ] {} <Trace> ContextAccess (default): List of all grants: GRANT ALL ON *.* WITH GRANT OPTION
2021.11.19 14:49:57.941601 [ 318580 ] {} <Trace> ContextAccess (default): List of all grants including implicit: GRANT ALL ON *.* WITH GRANT OPTION
2021.11.19 14:49:57.941935 [ 318580 ] {} <Debug> TCP-Session: 74113ef2-cdb4-4bbd-b411-3ef2cdb4bbbd Creating query context from session context, user_id: 94309d50-4f52-5250-31bd-74fecac179db, parent context user: default
2021.11.19 14:49:57.942609 [ 318580 ] {a9c87b52-d059-478e-a532-cfb402d3fffc} <Debug> executeQuery: (from 127.0.0.1:52294) (comment: 02096_totals_global_in_bug.sql) select sum(number) from remote('127.0.0.{2,3}', numbers(2)) where number global in (select sum(number) from numbers(2) group by number with totals) group by number with totals
2021.11.19 14:49:57.943877 [ 318580 ] {a9c87b52-d059-478e-a532-cfb402d3fffc} <Trace> ContextAccess (default): Access granted: CREATE TEMPORARY TABLE, REMOTE ON *.*
2021.11.19 14:49:57.944020 [ 318580 ] {a9c87b52-d059-478e-a532-cfb402d3fffc} <Trace> Connection (127.0.0.2:9000): Connecting. Database: (not specified). User: default
Segmentation fault
```
```
root@tserver-overlayregen:/clickhouse/ClickHouse/tests# ./clickhouse-test -b /clickhouse/ClickHouse/build/programs/clickhouse
Using queries from 'queries' directory
Using /clickhouse/ClickHouse/build/programs/clickhouse-client as client program (expecting split build)
Connecting to ClickHouse server... OK
Running 3524 stateless tests (MainProcess).
02114_hdfs_bad_url: [ FAIL ] - result differs with reference:
--- /clickhouse/ClickHouse/tests/queries/0_stateless/02114_hdfs_bad_url.reference 2021-11-16 05:55:33.000000000 +0000
+++ /clickhouse/ClickHouse/tests/queries/0_stateless/02114_hdfs_bad_url.stdout 2021-11-19 14:41:54.336000000 +0000
@@ -1,17 +1,17 @@
-OK
-OK
-OK
-OK
-OK
-OK
-OK
-OK
-OK
-OK
-OK
-OK
-OK
-OK
-OK
-OK
-OK
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
+FAIL
Database: test_u32k62
02113_format_row_bug: [ OK ]
02113_hdfs_assert: [ FAIL ] - result differs with reference:
--- /clickhouse/ClickHouse/tests/queries/0_stateless/02113_hdfs_assert.reference 2021-11-16 05:55:33.000000000 +0000
+++ /clickhouse/ClickHouse/tests/queries/0_stateless/02113_hdfs_assert.stdout 2021-11-19 14:41:54.572000000 +0000
@@ -1 +1 @@
-OK
+FAIL
Database: test_mp7wrb
02112_skip_index_set_and_or: [ OK ]
02112_parse_date_yyyymmdd: [ OK ]
02111_modify_table_comment: [ OK ]
02111_function_mapExtractKeyLike: [ OK ]
02111_with_fill_no_rows: [ OK ]
02110_clickhouse_local_custom_tld: [ OK ]
02104_clickhouse_local_columns_description: [ OK ]
02103_sql_user_defined_functions_composition: [ OK ]
02102_sql_user_defined_functions_create_if_not_exists: [ OK ]
02101_sql_user_defined_functions_drop_if_exists: [ OK ]
02101_sql_user_defined_functions_create_or_replace: [ FAIL ] - result differs with reference:
--- /clickhouse/ClickHouse/tests/queries/0_stateless/02101_sql_user_defined_functions_create_or_replace.reference 2021-11-16 05:55:33.000000000 +0000
+++ /clickhouse/ClickHouse/tests/queries/0_stateless/02101_sql_user_defined_functions_create_or_replace.stdout 2021-11-19 14:41:55.588000000 +0000
@@ -1,4 +1,67 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
CREATE FUNCTION `02101_test_function` AS x -> (x + 1)
2
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
CREATE FUNCTION `02101_test_function` AS x -> (x + 2)
3
Database: test_3bntfx
02100_now64_types_bug: [ OK ]
02100_limit_push_down_bug: [ OK ]
02100_alter_scalar_circular_deadlock: [ OK ]
02100_replaceRegexpAll_bug: [ OK ]
02099_hashed_array_dictionary_complex_key: [ OK ]
02099_sql_user_defined_functions_lambda: [ OK ]
02098_sql_user_defined_functions_aliases: [ OK ]
02098_hashed_array_dictionary_simple_key: [ OK ]
02098_date32_comparison: [ OK ]
02097_remove_sample_by: [ SKIPPED ] - no zookeeper
02097_polygon_dictionary_store_key: [ OK ]
02097_initializeAggregationNullable: [ OK ]
02097_default_dict_get_add_database: [ OK ]
02096_rename_atomic_hang: [ FAIL ] - return code: 70
[tserver-overlayregen] 2021.11.19 14:41:57.824042 [ 315690 ] {29e82268-2635-434f-954e-044ac139daf7} <Error> executeQuery: Code: 70. DB::Exception: Conversion from UInt64 to AggregateFunction(count) is not supported: while converting source column `count()` to destination column `count()`. (CANNOT_CONVERT_TYPE) (version 21.11.4.14) (from 127.0.0.1:39746) (comment: 02096_rename_atomic_hang.sql) (in query: select count() from db_hang.test_mv;), Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x22f0e7d0 in ?
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x1795b5cc in /clickhouse/ClickHouse/build/programs/clickhouse
2. DB::FunctionCast<DB::CastInternalName>::createAggregateFunctionWrapper(std::__1::shared_ptr<DB::IDataType const> const&, DB::DataTypeAggregateFunction const*) const @ 0x1a71b524 in /clickhouse/ClickHouse/build/programs/clickhouse
3. DB::FunctionCast<DB::CastInternalName>::prepareImpl(std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&, bool) const @ 0x1a717a7c in /clickhouse/ClickHouse/build/programs/clickhouse
4. DB::FunctionCast<DB::CastInternalName>::prepareRemoveNullable(std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&, bool) const @ 0x1a716a68 in /clickhouse/ClickHouse/build/programs/clickhouse
5. DB::FunctionCast<DB::CastInternalName>::prepareUnpackDictionaries(std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&) const @ 0x1a714b74 in /clickhouse/ClickHouse/build/programs/clickhouse
6. DB::FunctionCast<DB::CastInternalName>::prepare(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&) const @ 0x1a714288 in /clickhouse/ClickHouse/build/programs/clickhouse
7. DB::ActionsDAG::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x20dccf98 in ?
8. DB::ActionsDAG::makeConvertingActions(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, DB::ActionsDAG::MatchColumnsMode, bool, bool, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >*) @ 0x20dd8d20 in ?
9. DB::StorageMaterializedView::read(DB::QueryPlan&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0x219b8de0 in ?
10. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&) @ 0x212b6b44 in ?
11. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::optional<DB::Pipe>) @ 0x212b06d8 in ?
12. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x212b00b0 in ?
13. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x21467f08 in ?
14. DB::InterpreterSelectWithUnionQuery::execute() @ 0x21468c00 in ?
15. DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x21681d5c in ?
16. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x216801a4 in ?
17. DB::TCPHandler::runImpl() @ 0x2201f744 in ?
18. DB::TCPHandler::run() @ 0x22031678 in ?
19. Poco::Net::TCPServerConnection::start() @ 0x22e46c2c in ?
20. Poco::Net::TCPServerDispatcher::run() @ 0x22e474f4 in ?
21. Poco::PooledThread::run() @ 0x22f980f4 in ?
22. Poco::(anonymous namespace)::RunnableHolder::run() @ 0x22f960ac in ?
23. Poco::ThreadImpl::runnableEntry(void*) @ 0x22f94870 in ?
24. start_thread @ 0x9818 in /usr/lib/powerpc64le-linux-gnu/libpthread-2.31.so
Received exception from server (version 21.11.4):
Code: 70. DB::Exception: Received from localhost:9000. DB::Exception: Conversion from UInt64 to AggregateFunction(count) is not supported: while converting source column `count()` to destination column `count()`. (CANNOT_CONVERT_TYPE)
(query: select count() from db_hang.test_mv;)
, result:
2000
stdout:
2000
Database: test_01a9d0
02096_totals_global_in_bug: [ UNKNOWN ] - Test internal error: ConnectionRefusedError
[Errno 111] Connection refused
File "./clickhouse-test", line 644, in run
proc, stdout, stderr, total_time = self.run_single_test(server_logs_level, client_options)
File "./clickhouse-test", line 599, in run_single_test
clickhouse_execute(args, "DROP DATABASE " + database, timeout=seconds_left, settings={
File "./clickhouse-test", line 106, in clickhouse_execute
return clickhouse_execute_http(base_args, query, timeout, settings).strip()
File "./clickhouse-test", line 97, in clickhouse_execute_http
client.request('POST', '/?' + base_args.client_options_query_str + urllib.parse.urlencode(params))
File "/usr/lib/python3.8/http/client.py", line 1252, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1298, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1247, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1007, in _send_output
self.send(msg)
File "/usr/lib/python3.8/http/client.py", line 947, in send
self.connect()
File "/usr/lib/python3.8/http/client.py", line 918, in connect
self.sock = self._create_connection(
02096_bad_options_in_client_and_local: [ UNKNOWN ] - Test internal error: ConnectionRefusedError
```
**Expected behavior**
No segmentation fault
| https://github.com/ClickHouse/ClickHouse/issues/31524 | https://github.com/ClickHouse/ClickHouse/pull/31574 | 711b738dd1a22469a5eccfd65386f00a6a035023 | 796c76638a198f8e48b68d1e3fbdb2235a4b56aa | "2021-11-19T07:25:19Z" | c++ | "2021-11-20T22:16:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,503 | ["utils/check-style/check-style"] | Check for files that differ only by character case (they should not exist in the repo) | Add this to Style Check:
```
milovidov@milovidov-desktop:~/work/ClickHouse/contrib/sysroot$ find . | sort -f | uniq -i -c | awk '{ if ($1 > 1) print }'
2 ./linux-aarch64/aarch64-linux-gnu/libc/usr/include/linux/netfilter_ipv4/ipt_ecn.h
2 ./linux-aarch64/aarch64-linux-gnu/libc/usr/include/linux/netfilter_ipv4/ipt_ttl.h
2 ./linux-aarch64/aarch64-linux-gnu/libc/usr/include/linux/netfilter_ipv6/ip6t_hl.h
2 ./linux-aarch64/aarch64-linux-gnu/libc/usr/include/linux/netfilter/xt_connmark.h
2 ./linux-aarch64/aarch64-linux-gnu/libc/usr/include/linux/netfilter/xt_dscp.h
2 ./linux-aarch64/aarch64-linux-gnu/libc/usr/include/linux/netfilter/xt_mark.h
2 ./linux-aarch64/aarch64-linux-gnu/libc/usr/include/linux/netfilter/xt_rateest.h
2 ./linux-aarch64/aarch64-linux-gnu/libc/usr/include/linux/netfilter/xt_tcpmss.h
2 ./linux-powerpc64le/powerpc64le-linux-gnu/libc/usr/include/linux/netfilter_ipv4/ipt_ecn.h
2 ./linux-powerpc64le/powerpc64le-linux-gnu/libc/usr/include/linux/netfilter_ipv4/ipt_ttl.h
2 ./linux-powerpc64le/powerpc64le-linux-gnu/libc/usr/include/linux/netfilter_ipv6/ip6t_hl.h
2 ./linux-powerpc64le/powerpc64le-linux-gnu/libc/usr/include/linux/netfilter/xt_connmark.h
2 ./linux-powerpc64le/powerpc64le-linux-gnu/libc/usr/include/linux/netfilter/xt_dscp.h
2 ./linux-powerpc64le/powerpc64le-linux-gnu/libc/usr/include/linux/netfilter/xt_mark.h
2 ./linux-powerpc64le/powerpc64le-linux-gnu/libc/usr/include/linux/netfilter/xt_rateest.h
2 ./linux-powerpc64le/powerpc64le-linux-gnu/libc/usr/include/linux/netfilter/xt_tcpmss.h
2 ./linux-riscv64/usr/include/linux/netfilter_ipv4/ipt_ecn.h
2 ./linux-riscv64/usr/include/linux/netfilter_ipv4/ipt_ttl.h
2 ./linux-riscv64/usr/include/linux/netfilter_ipv6/ip6t_hl.h
2 ./linux-riscv64/usr/include/linux/netfilter/xt_connmark.h
2 ./linux-riscv64/usr/include/linux/netfilter/xt_dscp.h
2 ./linux-riscv64/usr/include/linux/netfilter/xt_mark.h
2 ./linux-riscv64/usr/include/linux/netfilter/xt_rateest.h
2 ./linux-riscv64/usr/include/linux/netfilter/xt_tcpmss.h
``` | https://github.com/ClickHouse/ClickHouse/issues/31503 | https://github.com/ClickHouse/ClickHouse/pull/31834 | 619ad46340e00b68a27a07caa1fed6e656c91150 | 16ff5c0b3e2e30a89dd4e027d6d2b1fcf5a8cbe9 | "2021-11-18T09:28:21Z" | c++ | "2021-11-26T00:10:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,471 | ["src/AggregateFunctions/AggregateFunctionFactory.cpp", "src/Common/Macros.cpp", "src/Common/Macros.h", "src/Databases/DatabaseFactory.cpp", "src/Databases/DatabaseReplicated.h", "src/Databases/DatabaseReplicatedHelpers.cpp", "src/Databases/DatabaseReplicatedHelpers.h", "src/Databases/TablesLoader.cpp", "src/Storages/MergeTree/registerStorageMergeTree.cpp", "tests/integration/test_replicated_database/test.py"] | Pass shard and replica from DatabaseReplicated to create table | **Describe the issue**
We currently have to set a macro or pass the shard and replica names explicitly in order to create a table in DatabaseReplicated.
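A sketch of the situation being described (the ZooKeeper path and the names are assumptions): the Replicated database already knows its shard and replica, yet tables created inside it still take them from elsewhere.

```sql
-- hypothetical illustration of the current behaviour
-- (Replicated databases were experimental at the time; allow_experimental_database_replicated may need to be enabled)
CREATE DATABASE db ENGINE = Replicated('/clickhouse/databases/db', 'shard_1', 'replica_1');
-- the shard/replica for ReplicatedMergeTree tables inside `db` still come from the {shard}/{replica}
-- macros in the server config (or must be written out explicitly) instead of being taken from the database
CREATE TABLE db.t (x UInt64) ENGINE = ReplicatedMergeTree ORDER BY x;
```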
| https://github.com/ClickHouse/ClickHouse/issues/31471 | https://github.com/ClickHouse/ClickHouse/pull/31488 | 2bef313f75e4cacc6ea2ef2133e8849ecf0385ec | 7a43a87f5b33c93db73d373ea83ae110b47cc1b8 | "2021-11-17T11:45:52Z" | c++ | "2021-11-23T09:41:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,449 | ["src/Compression/CachedCompressedReadBuffer.cpp", "tests/queries/0_stateless/02124_uncompressed_cache.reference", "tests/queries/0_stateless/02124_uncompressed_cache.sql"] | query does not return the expected results but mixing columns | ### Context
A computed W2V dataset is created and inserted into ClickHouse using the operations below; a copy of the dataset is included so this issue can be reproduced.
```SQL
-- cat vectors/part* | clickhouse-client -h localhost --query="insert into ot.ml_w2v_log format JSONEachRow "
create database if not exists ot;
create table if not exists ot.ml_w2v_log
(
category String,
word String,
norm Float64,
vector Array(Float64)
) engine = Log;
create table if not exists ot.ml_w2v
engine = MergeTree()
order by (word)
primary key (word)
as
select category,
word,
norm,
vector
from (select category, word, norm, vector from ot.ml_w2v_log);
```
The query used to get the top words for a given category is:
```SQL
WITH (
SELECT sumForEach(vector)
FROM ot.ml_w2v
PREWHERE (in(word,('CHEMBL1737')))
) AS vv,
sqrt(arraySum(x -> x*x,vv)) AS vvnorm,
if(and(notEquals(vvnorm,0.0),notEquals(norm,0.0)),divide(arraySum(x -> x.1 * x.2,arrayZip(vv,vector)),multiply(norm,vvnorm)),0.0) AS similarity
SELECT category, word, similarity
FROM ot.ml_w2v
PREWHERE (in(category,('disease')))
WHERE (greaterOrEquals(similarity,0.1))
ORDER BY similarity DESC
LIMIT 10 OFFSET 0;
```
The expected output is a list of `>= 0` elements coming only from the category `disease`. In the latest release, it returns words that do not belong to that category.
Output with each version:
```
ClickHouse client version 21.9.4.35 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.9.4 revision 54449.
```
```SQL
WITH
(
SELECT sumForEach(vector)
FROM ot.ml_w2v
PREWHERE word IN ('CHEMBL1737')
) AS vv,
sqrt(arraySum(x -> (x * x), vv)) AS vvnorm,
if((vvnorm != 0.) AND (norm != 0.), arraySum(x -> ((x.1) * (x.2)), arrayZip(vv, vector)) / (norm * vvnorm), 0.) AS similarity
SELECT
category,
word,
similarity
FROM ot.ml_w2v
PREWHERE category IN ('disease')
WHERE similarity >= 0.1
ORDER BY similarity DESC
LIMIT 0, 10
Query id: a4fbcdab-0bd7-4cdd-816f-fc5508154f25
┌─category─┬─word────────────┬─────────similarity─┐
│ disease │ EFO_0004234 │ 0.7696740602266341 │
│ disease │ EFO_1000466 │ 0.6860139858343028 │
│ disease │ MONDO_0001999 │ 0.6183489436097988 │
│ disease │ HP_0001667 │ 0.6032877790181791 │
│ disease │ EFO_0009085 │ 0.5989707188199231 │
│ disease │ Orphanet_156629 │ 0.5800397436624738 │
│ disease │ EFO_0009196 │ 0.579683404421804 │
│ disease │ EFO_0001361 │ 0.56757779832079 │
│ disease │ HP_0200023 │ 0.565599936638102 │
│ disease │ MONDO_0001574 │ 0.5552950663929026 │
└──────────┴─────────────────┴────────────────────┘
10 rows in set. Elapsed: 0.029 sec. Processed 47.47 thousand rows, 19.90 MB (1.66 million rows/s., 694.93 MB/s.)
```
and this output is expected. But when the same query is executed against the latest version
```
ClickHouse client version 21.11.3.6 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.11.3 revision 54450.
```
```SQL
WITH
(
SELECT sumForEach(vector)
FROM ot.ml_w2v
PREWHERE word IN ('CHEMBL1737')
) AS vv,
sqrt(arraySum(x -> (x * x), vv)) AS vvnorm,
if((vvnorm != 0.) AND (norm != 0.), arraySum(x -> ((x.1) * (x.2)), arrayZip(vv, vector)) / (norm * vvnorm), 0.) AS similarity
SELECT
category,
word,
similarity
FROM ot.ml_w2v
PREWHERE category IN ('disease')
WHERE similarity >= 0.1
ORDER BY similarity DESC
LIMIT 0, 10
Query id: 707cefb6-1c3b-427f-85ac-44a813020bab
┌─category─┬─word──────────┬─────────similarity─┐
│ disease │ CHEMBL2110641 │ 12.066153256687638 │
│ disease │ CHEMBL1887891 │ 8.441367361572434 │
│ disease │ CHEMBL219376 │ 8.42932989841403 │
│ disease │ CHEMBL2108401 │ 6.813855928884503 │
│ disease │ CHEMBL1927030 │ 5.82952056420569 │
│ disease │ CHEMBL1464 │ 5.501304739078533 │
│ disease │ CHEMBL2107880 │ 5.351481348458378 │
│ disease │ CHEMBL2103836 │ 5.082651728915573 │
│ disease │ CHEMBL2219640 │ 4.849979327914108 │
│ disease │ CHEMBL1644695 │ 4.6276729651167345 │
└──────────┴───────────────┴────────────────────┘
10 rows in set. Elapsed: 0.027 sec. Processed 47.13 thousand rows, 19.61 MB (1.75 million rows/s., 728.89 MB/s.)
```
this output is not expected, as there is no word starting with `CHEMBL` in the category `disease`. I suspect the similarity is not being computed correctly. Could it be related to the way the vector is taken from the scalar subquery in the WITH section?
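As a hypothetical diagnostic sketch (whether the uncompressed cache is relevant here is purely an assumption): check the raw data directly, with the cache disabled for that query, to confirm no `CHEMBL` word is stored under the `disease` category.

```sql
-- hypothetical diagnostic: query the raw rows with the uncompressed cache disabled and compare results
SELECT category, word
FROM ot.ml_w2v
WHERE category IN ('disease') AND word LIKE 'CHEMBL%'
SETTINGS use_uncompressed_cache = 0;
-- expected: 0 rows, i.e. no CHEMBL words should exist in the `disease` category at all
```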
| https://github.com/ClickHouse/ClickHouse/issues/31449 | https://github.com/ClickHouse/ClickHouse/pull/31826 | 06b8421cb66fcda280ca0edb7aa3389812fa5e30 | 120cb79bac477708cc85461e88f60dbe36d12e48 | "2021-11-16T12:42:48Z" | c++ | "2021-11-27T14:36:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,426 | ["src/Functions/FunctionsAES.h", "tests/queries/0_stateless/02124_encrypt_decrypt_nullable.reference", "tests/queries/0_stateless/02124_encrypt_decrypt_nullable.sql"] | Exception while using aes_encrypt_mysql function | ClickHouse v.21.8.10.19
Got an exception when aes_encrypt_mysql got an unexpected value as input.
test case:
````
create table test_table
(
value Nullable(String),
encrypted_value Nullable(String)
)
engine MergeTree order by assumeNotNull(value);
insert into test_table
select 'test_value1', aes_encrypt_mysql('aes-256-ecb', 'test_value1', 'test_key________________________');
insert into test_table
select '', aes_encrypt_mysql('aes-256-ecb', '', 'test_key________________________');
insert into test_table
select null, aes_encrypt_mysql('aes-256-ecb', null, 'test_key________________________');
select aes_decrypt_mysql('aes-256-ecb', value, 'test_key________________________')
from test_table
````
got `Failed to decrypt. OpenSSL error code: 503316603: while executing 'FUNCTION aes_decrypt_mysql` as exception | https://github.com/ClickHouse/ClickHouse/issues/31426 | https://github.com/ClickHouse/ClickHouse/pull/31707 | 0c7e56df5bbd0d4948c4ce039294e94e129bc6aa | d1e1255e38d44ae02c38e281419a48931f40a0c0 | "2021-11-15T14:18:14Z" | c++ | "2021-11-28T13:21:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,417 | ["src/Columns/ColumnDecimal.cpp", "src/Columns/ColumnFixedString.cpp", "src/Columns/ColumnVector.cpp", "src/Columns/ColumnsCommon.cpp", "src/Columns/ColumnsCommon.h"] | Invalid number or rows when starting ClickHouse on M1 ARM64 | **Describe what's wrong**
When starting a new empty ClickHouse server on a MacBook Pro M1 there seem to be some problems with the initial system table setup. Connecting the client gives an exception, as do a number of subsequent queries against a standard dataset - I'll focus on the first here.
I found two recent reports on Slack of what seems to be the same problem:
https://clickhousedb.slack.com/archives/CU478UEQZ/p1636612826400000
https://clickhousedb.slack.com/archives/CU478UEQZ/p1636485760380400?thread_ts=1636466896.364500&cid=CU478UEQZ
**Does it reproduce on recent release?**
For ARM64 I can only see a download link for the current master build so this relates to https://builds.clickhouse.com/master/macos-aarch64/clickhouse
**How to reproduce**
1. Download and run the current build for ARM64
```
wget 'https://builds.clickhouse.com/master/macos-aarch64/clickhouse'
chmod a+x ./clickhouse
./clickhouse server
```
2. Start the client
`./clickhouse client`
3. Error messages in both client and server (see below), query autocomplete in the client does not work.
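For reference, a reduced form of the client's suggestion query can be run manually. This is only a guess at a smaller reproduction; the full query the client sends is the long `UNION ALL` shown in the server log below:
```sql
-- Reduced form of the suggestion query (assumed smaller repro, not verified on the M1 build):
SELECT DISTINCT arrayJoin(extractAll(name, '[\\w_]{2,}')) AS res
FROM
(
    SELECT name FROM system.functions
    UNION ALL
    SELECT name FROM system.table_engines
)
WHERE notEmpty(res);
```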
**Expected behavior**
There should be no error messages and query autocomplete should work.
**Error message and/or stacktrace**
On the server:
```
2021.11.15 10:38:08.011131 [ 709567 ] {5b495f2e-63bb-4291-a3b6-a80399c5e86b} <Error> executeQuery: Code: 49. DB::Exception: Invalid number of rows in Chunk column String position 0: expected 72, got 12. (LOGICAL_ERROR) (version 21.12.1.8751 (official build)) (from [::1]:59809) (in query: SELECT DISTINCT arrayJoin(extractAll(name, '[\\w_]{2,}')) AS res FROM (SELECT name FROM system.functions UNION ALL SELECT name FROM system.table_engines UNION ALL SELECT name FROM system.formats UNION ALL SELECT name FROM system.table_functions UNION ALL SELECT name FROM system.data_type_families UNION ALL SELECT name FROM system.merge_tree_settings UNION ALL SELECT name FROM system.settings UNION ALL SELECT cluster FROM system.clusters UNION ALL SELECT macro FROM system.macros UNION ALL SELECT policy_name FROM system.storage_policies UNION ALL SELECT concat(func.name, comb.name) FROM system.functions AS func CROSS JOIN system.aggregate_function_combinators AS comb WHERE is_aggregate UNION ALL SELECT name FROM system.databases LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.tables LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.dictionaries LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.columns LIMIT 10000) WHERE notEmpty(res)), Stack trace (when copying this message, always include the lines below):
<Empty trace>
2021.11.15 10:38:08.011167 [ 709567 ] {5b495f2e-63bb-4291-a3b6-a80399c5e86b} <Error> TCPHandler: Code: 49. DB::Exception: Invalid number of rows in Chunk column String position 0: expected 72, got 12. (LOGICAL_ERROR), Stack trace (when copying this message, always include the lines below):
<Empty trace>
```
On the client:
```
$ ./clickhouse client
ClickHouse client version 21.12.1.8751 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.12.1 revision 54450.
mars :) Cannot load data for command line suggestions: Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid number of rows in Chunk column String position 0: expected 72, got 12. (LOGICAL_ERROR) (version 21.12.1.8751 (official build))
``` | https://github.com/ClickHouse/ClickHouse/issues/31417 | https://github.com/ClickHouse/ClickHouse/pull/31574 | 711b738dd1a22469a5eccfd65386f00a6a035023 | 796c76638a198f8e48b68d1e3fbdb2235a4b56aa | "2021-11-15T10:51:29Z" | c++ | "2021-11-20T22:16:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,365 | ["docs/tools/README.md", "docs/tools/build.py", "docs/tools/single_page.py", "website/README.md", "website/templates/docs/sidebar.html"] | PDF version is broken or link is broken a long time ago | null | https://github.com/ClickHouse/ClickHouse/issues/31365 | https://github.com/ClickHouse/ClickHouse/pull/31366 | f82fd18511642c3c224e1b7173969d59bd35ea68 | babf171a1d42c56d3c1e4387543cc34c0b8e697e | "2021-11-12T20:14:23Z" | c++ | "2021-11-12T22:22:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,361 | ["tests/queries/0_stateless/02524_fuzz_and_fuss.reference", "tests/queries/0_stateless/02524_fuzz_and_fuss.sql"] | Logical error: Invalid Field get from type Decimal64 to type Decimal128 | https://clickhouse-test-reports.s3.yandex.net/0/7a615a29f4bc7c62c43ef8cff59040c717bde975/fuzzer_debug/report.html#fail1
```
2021.11.12 13:31:58.791272 [ 156 ] {cc111396-5fa8-43a5-b527-58c27c537cc9} <Debug> executeQuery: (from [::1]:35760) SELECT [9223372036854775806, 1048575], [], sumMap(val, [toDateTime64([CAST(1., 'Decimal(10,2)'), CAST(10.000100135803223, 'Decimal(10,2)')], NULL), CAST(-0., 'Decimal(10,2)')], cnt) FROM (SELECT toDateTime64('0.0000001023', [1025, 256], '102.5', NULL), [NULL], [CAST('a', 'FixedString(1)'), CAST('', 'FixedString(1)')] AS val, [1024, 100] AS cnt)
2021.11.12 13:31:58.796081 [ 156 ] {cc111396-5fa8-43a5-b527-58c27c537cc9} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON system.one
2021.11.12 13:31:58.800478 [ 156 ] {cc111396-5fa8-43a5-b527-58c27c537cc9} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON system.one
2021.11.12 13:31:58.813484 [ 156 ] {cc111396-5fa8-43a5-b527-58c27c537cc9} <Trace> ContextAccess (default): Access granted: SELECT(dummy) ON system.one
2021.11.12 13:31:58.814235 [ 156 ] {cc111396-5fa8-43a5-b527-58c27c537cc9} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2021.11.12 13:31:58.814659 [ 156 ] {cc111396-5fa8-43a5-b527-58c27c537cc9} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2021.11.12 13:31:58.820864 [ 381 ] {cc111396-5fa8-43a5-b527-58c27c537cc9} <Trace> AggregatingTransform: Aggregating
2021.11.12 13:31:58.820937 [ 381 ] {cc111396-5fa8-43a5-b527-58c27c537cc9} <Trace> Aggregator: Aggregation method: without_key
2021.11.12 13:31:58.821240 [ 381 ] {cc111396-5fa8-43a5-b527-58c27c537cc9} <Debug> AggregatingTransform: Aggregated. 1 to 1 rows (from 80.00 B) in 0.003779052 sec. (264.617 rows/sec., 20.67 KiB/sec.)
2021.11.12 13:31:58.821317 [ 381 ] {cc111396-5fa8-43a5-b527-58c27c537cc9} <Trace> Aggregator: Merging aggregated data
2021.11.12 13:31:58.822096 [ 381 ] {cc111396-5fa8-43a5-b527-58c27c537cc9} <Fatal> : Logical error: 'Invalid Field get from type Decimal64 to type Decimal128'.
2021.11.12 13:31:58.823780 [ 384 ] {} <Fatal> BaseDaemon: ########################################
2021.11.12 13:31:58.824116 [ 384 ] {} <Fatal> BaseDaemon: (version 21.12.1.8724 (official build), build id: 31793282E8B4A3C8206EAE739D382EC943FC7C1E) (from thread 381) (query_id: cc111396-5fa8-43a5-b527-58c27c537cc9) Received signal Aborted (6)
2021.11.12 13:31:58.824577 [ 384 ] {} <Fatal> BaseDaemon:
2021.11.12 13:31:58.824976 [ 384 ] {} <Fatal> BaseDaemon: Stack trace: 0x7fed3aa4d18b 0x7fed3aa2c859 0x15094bb8 0x15094cc2 0x150ff2e6 0x16e54c5a 0x18b6e21d 0x2218fbd5 0x2218fba5 0x16ea8efb 0x21d3d7a4 0x21b4d8e3 0x21b3df46 0x21b3d6c9 0x2310b336 0x23107dbf 0x22de2af9 0x22de2a5f 0x22de29fd 0x22de29bd 0x22de2995 0x22de295d 0x150e1886 0x150e0915 0x22de1405 0x22de1d85 0x22ddfceb 0x22ddefb3 0x22dffa01 0x22dff920 0x22dff89d 0x22dff841 0x22dff752 0x22dff63b 0x22dff4fd 0x22dff4bd 0x22dff495 0x22dff460 0x150e1886 0x150e0915 0x1510c96f
2021.11.12 13:31:58.825433 [ 384 ] {} <Fatal> BaseDaemon: 4. gsignal @ 0x4618b in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.11.12 13:31:58.825623 [ 384 ] {} <Fatal> BaseDaemon: 5. abort @ 0x25859 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.11.12 13:31:58.913977 [ 384 ] {} <Fatal> BaseDaemon: 6. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:51: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*> > const&) @ 0x15094bb8 in /workspace/clickhouse
2021.11.12 13:31:58.986521 [ 384 ] {} <Fatal> BaseDaemon: 7. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:58: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x15094cc2 in /workspace/clickhouse
2021.11.12 13:31:59.075298 [ 384 ] {} <Fatal> BaseDaemon: 8. ./obj-x86_64-linux-gnu/../src/Common/Exception.h:40: DB::Exception::Exception<DB::Field::Types::Which&, DB::Field::Types::Which const&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Field::Types::Which&, DB::Field::Types::Which const&) @ 0x150ff2e6 in /workspace/clickhouse
2021.11.12 13:31:59.913873 [ 384 ] {} <Fatal> BaseDaemon: 9. ./obj-x86_64-linux-gnu/../src/Core/Field.h:785: DB::NearestFieldTypeImpl<std::__1::decay<DB::Decimal<wide::integer<128ul, int> > >::type, void>::Type& DB::Field::get<DB::Decimal<wide::integer<128ul, int> > >() @ 0x16e54c5a in /workspace/clickhouse
2021.11.12 13:32:00.633409 [ 384 ] {} <Fatal> BaseDaemon: 10. ./obj-x86_64-linux-gnu/../src/Core/Field.h:404: auto const& DB::Field::get<DB::Decimal<wide::integer<128ul, int> > >() const @ 0x18b6e21d in /workspace/clickhouse
2021.11.12 13:32:00.873940 [ 384 ] {} <Fatal> BaseDaemon: 11. ./obj-x86_64-linux-gnu/../src/Core/Field.h:819: DB::Decimal<wide::integer<128ul, int> > DB::get<DB::Decimal<wide::integer<128ul, int> > >(DB::Field const&) @ 0x2218fbd5 in /workspace/clickhouse
2021.11.12 13:32:01.065464 [ 384 ] {} <Fatal> BaseDaemon: 12. ./obj-x86_64-linux-gnu/../src/Columns/ColumnDecimal.h:111: DB::ColumnDecimal<DB::Decimal<wide::integer<128ul, int> > >::insert(DB::Field const&) @ 0x2218fba5 in /workspace/clickhouse
2021.11.12 13:32:02.049082 [ 384 ] {} <Fatal> BaseDaemon: 13. ./obj-x86_64-linux-gnu/../src/AggregateFunctions/AggregateFunctionSumMap.h:353: DB::AggregateFunctionMapBase<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, DB::AggregateFunctionSumMap<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, false, false>, DB::FieldVisitorSum, false, false, true>::insertResultInto(char*, DB::IColumn&, DB::Arena*) const @ 0x16ea8efb in /workspace/clickhouse
2021.11.12 13:32:03.175084 [ 384 ] {} <Fatal> BaseDaemon: 14. ./obj-x86_64-linux-gnu/../src/Interpreters/Aggregator.cpp:1332: void DB::Aggregator::insertAggregatesIntoColumns<char*>(char*&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*) const @ 0x21d3d7a4 in /workspace/clickhouse
2021.11.12 13:32:04.284175 [ 384 ] {} <Fatal> BaseDaemon: 15. ./obj-x86_64-linux-gnu/../src/Interpreters/Aggregator.cpp:0: DB::Aggregator::prepareBlockAndFillWithoutKey(DB::AggregatedDataVariants&, bool, bool) const::$_1::operator()(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, std::__1::vector<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*, std::__1::allocator<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, bool) const @ 0x21b4d8e3 in /workspace/clickhouse
2021.11.12 13:32:04.971068 [ 384 ] {} <Fatal> BaseDaemon: 16. ./obj-x86_64-linux-gnu/../src/Interpreters/Aggregator.cpp:1576: DB::Block DB::Aggregator::prepareBlockAndFill<DB::Aggregator::prepareBlockAndFillWithoutKey(DB::AggregatedDataVariants&, bool, bool) const::$_1&>(DB::AggregatedDataVariants&, bool, unsigned long, DB::Aggregator::prepareBlockAndFillWithoutKey(DB::AggregatedDataVariants&, bool, bool) const::$_1&) const @ 0x21b3df46 in /workspace/clickhouse
2021.11.12 13:32:05.658381 [ 384 ] {} <Fatal> BaseDaemon: 17. ./obj-x86_64-linux-gnu/../src/Interpreters/Aggregator.cpp:1678: DB::Aggregator::prepareBlockAndFillWithoutKey(DB::AggregatedDataVariants&, bool, bool) const @ 0x21b3d6c9 in /workspace/clickhouse
2021.11.12 13:32:05.969102 [ 384 ] {} <Fatal> BaseDaemon: 18. ./obj-x86_64-linux-gnu/../src/Processors/Transforms/AggregatingTransform.cpp:339: DB::ConvertingAggregatedToChunksTransform::initialize() @ 0x2310b336 in /workspace/clickhouse
2021.11.12 13:32:06.280675 [ 384 ] {} <Fatal> BaseDaemon: 19. ./obj-x86_64-linux-gnu/../src/Processors/Transforms/AggregatingTransform.cpp:174: DB::ConvertingAggregatedToChunksTransform::work() @ 0x23107dbf in /workspace/clickhouse
2021.11.12 13:32:06.510948 [ 384 ] {} <Fatal> BaseDaemon: 20. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:88: DB::executeJob(DB::IProcessor*) @ 0x22de2af9 in /workspace/clickhouse
2021.11.12 13:32:06.723554 [ 384 ] {} <Fatal> BaseDaemon: 21. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:105: DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0::operator()() const @ 0x22de2a5f in /workspace/clickhouse
2021.11.12 13:32:06.950810 [ 384 ] {} <Fatal> BaseDaemon: 22. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(fp)()) std::__1::__invoke<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) @ 0x22de29fd in /workspace/clickhouse
2021.11.12 13:32:07.158747 [ 384 ] {} <Fatal> BaseDaemon: 23. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) @ 0x22de29bd in /workspace/clickhouse
2021.11.12 13:32:07.361668 [ 384 ] {} <Fatal> BaseDaemon: 24. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, void ()>::operator()() @ 0x22de2995 in /workspace/clickhouse
2021.11.12 13:32:07.565669 [ 384 ] {} <Fatal> BaseDaemon: 25. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, void ()> >(std::__1::__function::__policy_storage const*) @ 0x22de295d in /workspace/clickhouse
2021.11.12 13:32:07.616187 [ 384 ] {} <Fatal> BaseDaemon: 26. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x150e1886 in /workspace/clickhouse
2021.11.12 13:32:07.665368 [ 384 ] {} <Fatal> BaseDaemon: 27. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560: std::__1::function<void ()>::operator()() const @ 0x150e0915 in /workspace/clickhouse
2021.11.12 13:32:07.857795 [ 384 ] {} <Fatal> BaseDaemon: 28. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:602: DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x22de1405 in /workspace/clickhouse
2021.11.12 13:32:08.049969 [ 384 ] {} <Fatal> BaseDaemon: 29. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:487: DB::PipelineExecutor::executeSingleThread(unsigned long, unsigned long) @ 0x22de1d85 in /workspace/clickhouse
2021.11.12 13:32:08.220967 [ 384 ] {} <Fatal> BaseDaemon: 30. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:826: DB::PipelineExecutor::executeImpl(unsigned long) @ 0x22ddfceb in /workspace/clickhouse
2021.11.12 13:32:08.427795 [ 384 ] {} <Fatal> BaseDaemon: 31. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:407: DB::PipelineExecutor::execute(unsigned long) @ 0x22ddefb3 in /workspace/clickhouse
2021.11.12 13:32:08.568357 [ 384 ] {} <Fatal> BaseDaemon: 32. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:85: DB::threadFunction(DB::PullingAsyncPipelineExecutor::Data&, std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0x22dffa01 in /workspace/clickhouse
2021.11.12 13:32:08.707437 [ 384 ] {} <Fatal> BaseDaemon: 33. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:113: DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0::operator()() const @ 0x22dff920 in /workspace/clickhouse
2021.11.12 13:32:08.847330 [ 384 ] {} <Fatal> BaseDaemon: 34. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682: decltype(std::__1::forward<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&>(fp)()) std::__1::__invoke_constexpr<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&) @ 0x22dff89d in /workspace/clickhouse
2021.11.12 13:32:08.988025 [ 384 ] {} <Fatal> BaseDaemon: 35. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415: decltype(auto) std::__1::__apply_tuple_impl<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) @ 0x22dff841 in /workspace/clickhouse
2021.11.12 13:32:09.127436 [ 384 ] {} <Fatal> BaseDaemon: 36. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424: decltype(auto) std::__1::apply<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&) @ 0x22dff752 in /workspace/clickhouse
2021.11.12 13:32:09.251298 [ 384 ] {} <Fatal> BaseDaemon: 37. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:188: ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()::operator()() @ 0x22dff63b in /workspace/clickhouse
2021.11.12 13:32:09.391272 [ 384 ] {} <Fatal> BaseDaemon: 38. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&) @ 0x22dff4fd in /workspace/clickhouse
2021.11.12 13:32:09.531044 [ 384 ] {} <Fatal> BaseDaemon: 39. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&...) @ 0x22dff4bd in /workspace/clickhouse
2021.11.12 13:32:09.669984 [ 384 ] {} <Fatal> BaseDaemon: 40. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>::operator()() @ 0x22dff495 in /workspace/clickhouse
2021.11.12 13:32:09.808585 [ 384 ] {} <Fatal> BaseDaemon: 41. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x22dff460 in /workspace/clickhouse
2021.11.12 13:32:09.858183 [ 384 ] {} <Fatal> BaseDaemon: 42. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x150e1886 in /workspace/clickhouse
2021.11.12 13:32:09.905166 [ 384 ] {} <Fatal> BaseDaemon: 43. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560: std::__1::function<void ()>::operator()() const @ 0x150e0915 in /workspace/clickhouse
2021.11.12 13:32:09.971151 [ 384 ] {} <Fatal> BaseDaemon: 44. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:274: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x1510c96f in /workspace/clickhouse
2021.11.12 13:32:11.307841 [ 384 ] {} <Fatal> BaseDaemon: Checksum of the binary: 1E8B119750F0B37B2A4085F3613DABCC, integrity check passed.
``` | https://github.com/ClickHouse/ClickHouse/issues/31361 | https://github.com/ClickHouse/ClickHouse/pull/45034 | 0d60f564decd77e6af9305acdf45f5c18da4c793 | 2254855f09256708fbec70ce545ad7e7e1f4aeac | "2021-11-12T16:04:22Z" | c++ | "2023-01-08T07:21:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,349 | ["src/QueryPipeline/RemoteQueryExecutor.cpp", "tests/queries/0_stateless/02116_global_in_time_limit.reference", "tests/queries/0_stateless/02116_global_in_time_limit.sh"] | execution of the query ignores the max_execution_time setting | **Describe what's wrong**
Some requests do not respect the limits set by the max_execution_time setting and are executed for an arbitrarily long time.
This has been seen with queries containing the expressions `global in (_subquery)` or `global join _data`.
The trace_log for these queries contains calls inside `DB::Context::initializeExternalTablesIfSet()`.
It is expected that during the execution of the request, the server will check from time to time whether it has gone beyond the limits.
**How to reproduce**
```
┌─version()──┐
│ 21.8.10.19 │
└────────────┘
```
clickhouse-client, http
Execute the query `select column, count() from table1 where col global in (select col from table2 where cond) group by column` with distributed tables table1/table2 on a sufficiently large cluster; a fuller sketch is given below.
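A fuller sketch of the reproduction (table and column names here are illustrative placeholders, not the actual schema):
```sql
-- Illustrative only: table1/table2 stand for Distributed tables over a large cluster.
SET max_execution_time = 10;

SELECT column, count()
FROM table1
WHERE col GLOBAL IN (SELECT col FROM table2 WHERE cond)
GROUP BY column;

-- Expected: the query is cancelled once the 10-second limit is reached.
-- Observed: remote servers keep receiving/building the external table indefinitely.
```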
**Expected behavior**
After the timeout expires, the query should end with error code 159/160/209.
**Actual behavior**
Remote servers spend unlimited time executing their part of the distributed query. The trace_log of the corresponding queries looks like this:
```
/usr/bin/clickhouse DB::Block::~Block()
/usr/bin/clickhouse std::__1::__shared_ptr_pointer<std::__1::vector<DB::Block, std::__1::allocator<DB::Block> > const*, std::__1::default_delete<std::__1::vector<DB::Block, std::__1::allocator<DB::Block> > const>, std::__1::allocator<std::__1::vector<DB::Block, std::__1::allocator<DB::Block> > const> >::__on_zero_shared()
/usr/bin/clickhouse DB::MemoryBlockOutputStream::writeSuffix()
/usr/bin/clickhouse DB::TCPHandler::receiveData(bool)
/usr/bin/clickhouse DB::TCPHandler::receivePacket()
/usr/bin/clickhouse DB::TCPHandler::readDataNext(unsigned long, long)
/usr/bin/clickhouse
/usr/bin/clickhouse DB::Context::initializeExternalTablesIfSet()
/usr/bin/clickhouse
/usr/bin/clickhouse DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool)
/usr/bin/clickhouse DB::TCPHandler::runImpl()
/usr/bin/clickhouse DB::TCPHandler::run()
/usr/bin/clickhouse Poco::Net::TCPServerConnection::start()
/usr/bin/clickhouse Poco::Net::TCPServerDispatcher::run()
/usr/bin/clickhouse Poco::PooledThread::run()
/usr/bin/clickhouse Poco::ThreadImpl::runnableEntry(void*)
/lib/x86_64-linux-gnu/libpthread-2.19.so start_thread
/lib/x86_64-linux-gnu/libc-2.19.so clone
/usr/bin/clickhouse operator new(unsigned long)
/usr/bin/clickhouse std::__1::__hash_table<std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long>, std::__1::__unordered_map_hasher<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, true>, std::__1::__unordered_map_equal<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, true>, std::__1::allocator<std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, unsigned long> > >::__rehash(unsigned long)
/usr/bin/clickhouse void std::__1::allocator<DB::Block>::construct<DB::Block, DB::Block&>(DB::Block*, DB::Block&)
/usr/bin/clickhouse DB::MemoryBlockOutputStream::writeSuffix()
/usr/bin/clickhouse DB::TCPHandler::receiveData(bool)
/usr/bin/clickhouse DB::TCPHandler::receivePacket()
/usr/bin/clickhouse DB::TCPHandler::readDataNext(unsigned long, long)
/usr/bin/clickhouse
/usr/bin/clickhouse DB::Context::initializeExternalTablesIfSet()
/usr/bin/clickhouse
/usr/bin/clickhouse DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool)
/usr/bin/clickhouse DB::TCPHandler::runImpl()
/usr/bin/clickhouse DB::TCPHandler::run()
/usr/bin/clickhouse Poco::Net::TCPServerConnection::start()
/usr/bin/clickhouse Poco::Net::TCPServerDispatcher::run()
/usr/bin/clickhouse Poco::PooledThread::run()
/usr/bin/clickhouse Poco::ThreadImpl::runnableEntry(void*)
/lib/x86_64-linux-gnu/libpthread-2.19.so start_thread
/lib/x86_64-linux-gnu/libc-2.19.so clone
/usr/bin/clickhouse std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >::vector(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&)
/usr/bin/clickhouse void std::__1::allocator<DB::Block>::construct<DB::Block, DB::Block&>(DB::Block*, DB::Block&)
/usr/bin/clickhouse DB::MemoryBlockOutputStream::writeSuffix()
/usr/bin/clickhouse DB::TCPHandler::receiveData(bool)
/usr/bin/clickhouse DB::TCPHandler::receivePacket()
/usr/bin/clickhouse DB::TCPHandler::readDataNext(unsigned long, long)
/usr/bin/clickhouse
/usr/bin/clickhouse DB::Context::initializeExternalTablesIfSet()
/usr/bin/clickhouse
/usr/bin/clickhouse DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool)
/usr/bin/clickhouse DB::TCPHandler::runImpl()
/usr/bin/clickhouse DB::TCPHandler::run()
/usr/bin/clickhouse Poco::Net::TCPServerConnection::start()
/usr/bin/clickhouse Poco::Net::TCPServerDispatcher::run()
/usr/bin/clickhouse Poco::PooledThread::run()
/usr/bin/clickhouse Poco::ThreadImpl::runnableEntry(void*)
/lib/x86_64-linux-gnu/libpthread-2.19.so start_thread
/lib/x86_64-linux-gnu/libc-2.19.so clone
```
**Additional context**
Additional heavy filters (like a substring over a long String column) in `cond` not only reduce the number of rows returned from the subquery, but also increase the overall query execution time.
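One way to observe this (a suggestion, not part of the original report) is to look at `system.processes` on a remote shard after the initiator's timeout has already expired:
```sql
-- On a remote shard: the remote part of the query is still running long after
-- max_execution_time has expired on the initiator.
SELECT query_id, elapsed, query
FROM system.processes
WHERE query NOT LIKE '%system.processes%';
```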
| https://github.com/ClickHouse/ClickHouse/issues/31349 | https://github.com/ClickHouse/ClickHouse/pull/31805 | 35622644048341033b125016c9cb380347ed726e | bda8cb6b7ee8feb90a6237b21c03a50f46ffa5ce | "2021-11-12T14:56:54Z" | c++ | "2021-11-26T07:41:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,339 | ["src/Coordination/KeeperStateManager.cpp", "tests/integration/test_keeper_incorrect_config/__init__.py", "tests/integration/test_keeper_incorrect_config/configs/enable_keeper1.xml", "tests/integration/test_keeper_incorrect_config/test.py"] | Add assertion on incorrect raft config | **Describe the unexpected behaviour**
ClickHouse Keeper reached consensus and elected a leader with an incorrect configuration:
``` yaml
raft_configuration:
server:
- id: 0
hostname: keeper-0
port: 44444
- id: 1
hostname: keeper-0
port: 44444
- id: 2
hostname: keeper-0
port: 44444
```
Nodes had different ids to achieve this.
**Expected behavior**
Add sanity checks for raft config:
- No duplicate keeper nodes
- No duplicate ids
We can
1) Fail if keeper/server starts with such configuration
2) We should not allow to commit this configuration to log even if bad node proposed such change. | https://github.com/ClickHouse/ClickHouse/issues/31339 | https://github.com/ClickHouse/ClickHouse/pull/32121 | 308eadd83a3e7622736c17aa606174c3bc7a663a | 45021bd35c9694115f353ec30fda3fec1557cab1 | "2021-11-12T11:12:08Z" | c++ | "2021-12-02T19:24:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,315 | ["src/Databases/DDLDependencyVisitor.cpp", "src/Databases/DDLDependencyVisitor.h", "src/Databases/DatabaseMemory.cpp", "src/Databases/DatabaseOrdinary.cpp", "src/Databases/TablesLoader.cpp", "src/Interpreters/InterpreterCreateQuery.cpp", "tests/integration/test_dictionaries_dependency_xml/configs/dictionaries/node.xml", "tests/integration/test_dictionaries_dependency_xml/test.py"] | 21.11 unable to start: Cannot attach 1 tables due to cyclic dependencies | 21.11.3.6
```
cat /etc/clickhouse-server/node.tsv
1,test
cat /etc/clickhouse-server/node_dictionary.xml
<dictionaries>
<dictionary>
<name>node</name>
<source>
<file>
<path>/etc/clickhouse-server/node.tsv</path>
<format>CSV</format>
</file>
</source>
<lifetime>0</lifetime>
<layout><flat /></layout>
<structure>
<id><name>key</name></id>
<attribute>
<name>name</name>
<type>String</type>
<null_value></null_value>
</attribute>
</structure>
</dictionary>
</dictionaries>
create table default.node ( key UInt64, name String ) Engine=Dictionary(node);
select * from default.node;
┌─key─┬─name─┐
│ 1 │ test │
└─────┴──────┘
/etc/init.d/clickhouse-server restart
<Error> Application: DB::Exception: Cannot attach 1 tables due to cyclic dependencies. See server log for details.
<Information> Application: shutting down
``` | https://github.com/ClickHouse/ClickHouse/issues/31315 | https://github.com/ClickHouse/ClickHouse/pull/32288 | d2a606b8aff849d966b00ca3df8b8b5c73e85159 | 657db077955027cc8624ec77ac3ed5f2bbc19c30 | "2021-11-11T19:59:32Z" | c++ | "2021-12-06T17:02:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,264 | ["docs/en/engines/table-engines/mergetree-family/mergetree.md", "docs/ru/engines/table-engines/mergetree-family/mergetree.md", "src/Disks/IVolume.h", "src/Disks/StoragePolicy.cpp", "src/Disks/VolumeJBOD.cpp", "tests/config/config.d/storage_conf_02961.xml", "tests/config/install.sh", "tests/queries/0_stateless/02961_storage_config_volume_priority.reference", "tests/queries/0_stateless/02961_storage_config_volume_priority.sh"] | Defining volume_priority expliclty in storage_configuration | Now a volume_priority is defined by an order in XML.
This is an unorthodox approach, because element order in XML should not carry meaning.
So please implement an explicit `volume_priority` setting:
```
<storage_configuration>
...
<policies>
<moving_from_ssd_to_hdd>
<volumes>
<hot>
<volume_priority>1</volume_priority>
<disk>fast_ssd1</disk>
<disk>fast_ssd2</disk>
<max_data_part_size_bytes>1073741824</max_data_part_size_bytes>
</hot>
<cold>
<volume_priority>2</volume_priority>
<disk>disk1</disk>
</cold>
</volumes>
<move_factor>0.2</move_factor>
</moving_from_ssd_to_hdd>
...
</storage_configuration>
``` | https://github.com/ClickHouse/ClickHouse/issues/31264 | https://github.com/ClickHouse/ClickHouse/pull/58533 | 86ba3c413036a83aa644d3b6ae9043468214cca8 | 9dff4e833162c67772b523a5ca65463e9033c674 | "2021-11-10T22:44:13Z" | c++ | "2024-02-26T19:57:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,251 | ["tests/queries/0_stateless/02125_many_mutations.reference", "tests/queries/0_stateless/02125_many_mutations.sh"] | Clickhorse server has been frequently restarted recently. Sometimes it runs several hours after it breaks down | 2021.11.10 16:55:18.036576 [ 12834 ] {} <Trace> shop.shop_sk_game_order (ReplicatedMergeTreeQueue): Adding mutation 0001047209 for partition 9acaaa3e06ded08bb72dd8cc9d8c01e2 for all block numbers less than 1046914
2021.11.10 16:55:18.036586 [ 12834 ] {} <Trace> shop.shop_sk_game_order (ReplicatedMergeTreeQueue): Adding mutation 0001047209 for partition 9af72a5f3fe6ccac14195ca8f8f059a9 for all block numbers less than 8131
2021.11.10 16:55:18.036599 [ 12834 ] {} <Trace> shop.shop_sk_game_order (ReplicatedMergeTreeQueue): Adding mutation 0001047209 for partition 9b3fc6295832a526e78fb26a747ba4a4 for all block numbers less than 1047119
2021.11.10 16:55:18.036607 [ 12834 ] {} <Trace> shop.shop_sk_game_order (ReplicatedMergeTreeQueue): Adding mutation 0001047209 for partition 9b81529227e47c7bc8e859f432862d59 for all block numbers less than 1046914
2021.11.10 16:55:18.036614 [ 12834 ] {} <Trace> shop.shop_sk_game_order (ReplicatedMergeTreeQueue): Adding mutation 0001047209 for partition 9ba98e02440aae1c8ed7c77f627a3641 for all block numbers less than 686520
...skipping...
2021.11.10 16:56:06.386963 [ 12981 ] {} <Fatal> BaseDaemon: ########################################
2021.11.10 16:56:06.387001 [ 12981 ] {} <Fatal> BaseDaemon: (version 21.10.2.15 (official build), build id: 6699B86599A2121E78E0D42DD67791ABD9AE5265) (from thread 12817) (no query) Received signal Segmentation fault (11)
2021.11.10 16:56:06.387029 [ 12981 ] {} <Fatal> BaseDaemon: Address: 0x7fd562469fb8 Access: write. Attempted access has violated the permissions assigned to the memory area.
2021.11.10 16:56:06.387067 [ 12981 ] {} <Fatal> BaseDaemon: Stack trace: 0x1199e0b5 0x119d6cbb 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209 0x119d6adc 0x119d7209
2021.11.10 16:56:06.387514 [ 12981 ] {} <Fatal> BaseDaemon: 1. DB::ISimpleTransform::prepare() @ 0x1199e0b5 in /usr/bin/clickhouse
2021.11.10 16:56:06.387551 [ 12981 ] {} <Fatal> BaseDaemon: 2. DB::PipelineExecutor::prepareProcessor(unsigned long, unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::unique_lock<std::__1::mutex>) @ 0x119d6cbb in /usr/bin/clickhouse
2021.11.10 16:56:06.387580 [ 12981 ] {} <Fatal> BaseDaemon: 3. DB::PipelineExecutor::tryAddProcessorToStackIfUpdated(DB::ExecutingGraph::Edge&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, unsigned long) @ 0x119d6adc in /usr/bin/clickhouse
2021.11.10 16:56:06.387600 [ 12981 ] {} <Fatal> BaseDaemon: 4. DB::PipelineExecutor::prepareProcessor(unsigned long, unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::unique_lock<std::__1::mutex>) @ 0x119d7209 in /usr/bin/clickhouse
2021.11.10 16:56:06.387629 [ 12981 ] {} <Fatal> BaseDaemon: 5. DB::PipelineExecutor::tryAddProcessorToStackIfUpdated(DB::ExecutingGraph::Edge&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, unsigned long) @ 0x119d6adc in /usr/bin/clickhouse
2021.11.10 16:56:06.387648 [ 12981 ] {} <Fatal> BaseDaemon: 6. DB::PipelineExecutor::prepareProcessor(unsigned long, unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::unique_lock<std::__1::mutex>) @ 0x119d7209 in /usr/bin/clickhouse
2021.11.10 16:56:06.387672 [ 12981 ] {} <Fatal> BaseDaemon: 7. DB::PipelineExecutor::tryAddProcessorToStackIfUpdated(DB::ExecutingGraph::Edge&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, unsigned long) @ 0x119d6adc in /usr/bin/clickhouse
2021.11.10 16:56:06.387688 [ 12981 ] {} <Fatal> BaseDaemon: 8. DB::PipelineExecutor::prepareProcessor(unsigned long, unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::unique_lock<std::__1::mutex>) @ 0x119d7209 in /usr/bin/clickhouse
2021.11.10 16:56:06.387706 [ 12981 ] {} <Fatal> BaseDaemon: 9. DB::PipelineExecutor::tryAddProcessorToStackIfUpdated(DB::ExecutingGraph::Edge&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, unsigned long) @ 0x119d6adc in /usr/bin/clickhouse
2021.11.10 16:56:06.387724 [ 12981 ] {} <Fatal> BaseDaemon: 10. DB::PipelineExecutor::prepareProcessor(unsigned long, unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::unique_lock<std::__1::mutex>) @ 0x119d7209 in /usr/bin/clickhouse
2021.11.10 16:56:06.387739 [ 12981 ] {} <Fatal> BaseDaemon: 11. DB::PipelineExecutor::tryAddProcessorToStackIfUpdated(DB::ExecutingGraph::Edge&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, unsigned long) @ 0x119d6adc in /usr/bin/clickhouse
2021.11.10 16:56:06.387762 [ 12981 ] {} <Fatal> BaseDaemon: 12. DB::PipelineExecutor::prepareProcessor(unsigned long, unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::unique_lock<std::__1::mutex>) @ 0x119d7209 in /usr/bin/clickhouse
2021.11.10 16:56:06.387779 [ 12981 ] {} <Fatal> BaseDaemon: 13. DB::PipelineExecutor::tryAddProcessorToStackIfUpdated(DB::ExecutingGraph::Edge&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, unsigned long) @ 0x119d6adc in /usr/bin/clickhouse
| https://github.com/ClickHouse/ClickHouse/issues/31251 | https://github.com/ClickHouse/ClickHouse/pull/32327 | 30963dcbc2e97b831677415314bccd5275702884 | 4e6bf2456c8caa3fc718b537d64ae1b44d6e2ab7 | "2021-11-10T15:20:52Z" | c++ | "2021-12-08T11:23:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,181 | ["src/Dictionaries/SSDCacheDictionaryStorage.h", "src/Disks/LocalDirectorySyncGuard.cpp", "src/IO/WriteBufferFromFileDescriptor.cpp", "utils/iotest/iotest.cpp", "utils/iotest/iotest_nonblock.cpp"] | Replace `fsync` to `fdatasync` in all places. | File size is updated correctly according to
> fdatasync() is similar to fsync(), but does not flush modified metadata unless that metadata is needed in order to allow a subsequent data retrieval to be correctly handled. For example, changes to st_atime or st_mtime (respectively, time of last access and time of last modification; see inode(7)) do not require flushing because they are not necessary for a subsequent data read to be handled correctly. On the other hand, a change to the file size (st_size, as made by say ftruncate(2)), would require a metadata flush.
And we don't care about modification times.
Should work alright. | https://github.com/ClickHouse/ClickHouse/issues/31181 | https://github.com/ClickHouse/ClickHouse/pull/31229 | f4fda976efa3b793ace0a2b9ede118f7e20f93e7 | cb6342025d202f1cb44dd858419c7fd8e58d6d30 | "2021-11-09T12:51:24Z" | c++ | "2021-11-14T02:16:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,174 | ["src/Access/EnabledQuota.cpp"] | Quota limit was not reached, but the limit was exceeded | ## Describe what's wrong
The quota limit was not reached, but the limit was reported as exceeded. This comes from the changes in https://github.com/ClickHouse/ClickHouse/pull/20106.
## Does it reproduce on recent release?
Version 21.3 or later
## How to reproduce
1. Set the quota for the account
```
CREATE USER IF NOT EXISTS jd_olap IDENTIFIED WITH double_sha1_password BY '123456'
GRANT SHOW, SELECT, INSERT, ALTER, CREATE DATABASE, CREATE TABLE, CREATE VIEW, CREATE DICTIONARY, DROP, TRUNCATE, OPTIMIZE, SYSTEM MERGES, SYSTEM TTL MERGES, SYSTEM FETCHES, SYSTEM MOVES, SYSTEM SENDS, SYSTEM REPLICATION QUEUES, SYSTEM SYNC REPLICA, SYSTEM RESTART REPLICA, SYSTEM FLUSH DISTRIBUTED, dictGet ON jd_olap.* TO jd_olap
CREATE QUOTA jd_olap_10s FOR INTERVAL 10 second MAX queries = 2 TO jd_olap;
```
2. Create the table
```
CREATE DATABASE IF NOT EXISTS jd_olap on cluster system_cluster;
CREATE TABLE IF NOT EXISTS jd_olap.quota_test_local on cluster system_cluster
(
`user` String,
`max_concurrent_queries` UInt32,
`max_execution_time` UInt32,
`requests_per_minute` UInt32,
`dt` Date
)
ENGINE = ReplicatedReplacingMergeTree('/clickhouse/system_cluster/jdob_ha/jd_olap/quota_test_local/{shard}', '{replica}')
PARTITION BY dt
ORDER BY user
SETTINGS storage_policy = 'jdob_ha', index_granularity = 8192;
```
3. Execute a query (insert, select, or alter); this will trigger the problem (see the sketch and the screenshot below).
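For illustration, a minimal sequence that should hit the quota from step 1 (assuming the `jd_olap` user and the 10-second / 2-query quota created above):
```sql
-- Run as user jd_olap within a single 10-second interval.
-- Only the third query should be rejected for exceeding the quota,
-- but because of this bug the used counter is not reset correctly after a restart,
-- so queries are rejected even though the limit was not actually reached.
SELECT count() FROM jd_olap.quota_test_local;
SELECT count() FROM jd_olap.quota_test_local;
SELECT count() FROM jd_olap.quota_test_local;
```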

## Expected behavior
There is a bug where the end time is 1970, causing end_loaded.count = 0;

The used quota is not reset when it is first calculated (e.g., when a node is restarted or created for the first time), because 1970 is taken as the first end time; the usage counter keeps incrementing from the first calculated value until the next interval. Used quota values are reset only at the next interval, as shown in the following figure: 16:45 and 16:47 are both calculated for the first time after the node restart, and the quota is not reset.
```
2021.11.09 16:49:17.481461 [ 14825 ] {1686DC283041A762} <Information> EnabledQuota: current_time: 2021-11-09 16:49:17, end: 1970-01-01 08:00:00
2021.11.09 16:49:17.481479 [ 14825 ] {1686DC283041A762} <Information> EnabledQuota: need_reset_counters: false, end_load_count: 0, interval.used: 11
```
| https://github.com/ClickHouse/ClickHouse/issues/31174 | https://github.com/ClickHouse/ClickHouse/pull/31656 | 9ede6beca7c276af12b0f008b009771835155769 | 94e2e3625b489e2d975c77037fb47f3270f1e210 | "2021-11-09T09:51:28Z" | c++ | "2021-12-11T08:00:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,172 | ["src/Processors/Transforms/AggregatingInOrderTransform.cpp", "src/Processors/Transforms/MergingAggregatedMemoryEfficientTransform.cpp", "src/Processors/Transforms/MergingAggregatedTransform.cpp", "tests/queries/0_stateless/02176_optimize_aggregation_in_order_empty.reference", "tests/queries/0_stateless/02176_optimize_aggregation_in_order_empty.sql", "tests/queries/0_stateless/02177_merge_optimize_aggregation_in_order.reference", "tests/queries/0_stateless/02177_merge_optimize_aggregation_in_order.sql"] | Chunk should have AggregatedChunkInfo in GroupingAggregatedTransform | Found in stress test: https://gist.github.com/alesapin/5cd6b8901488a1f170d5ef389900a534
Report: https://s3.amazonaws.com/clickhouse-test-reports/31158/7fb9a42b119b6c3eba1784caaabde51c1ded3384/stress_tests__debug__actions_.html | https://github.com/ClickHouse/ClickHouse/issues/31172 | https://github.com/ClickHouse/ClickHouse/pull/33637 | caa66a5a09c0af856e1a24b171c98352913faa5b | 6861adadcf8ece69fe597ddbc026311525bd0392 | "2021-11-09T07:41:28Z" | c++ | "2022-01-22T16:05:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,117 | ["src/Disks/DiskLocal.cpp", "src/Disks/HDFS/DiskHDFS.cpp", "src/Disks/HDFS/DiskHDFS.h", "src/Disks/IDiskRemote.cpp", "src/Disks/IDiskRemote.h", "src/Disks/S3/DiskS3.cpp", "src/Disks/S3/DiskS3.h", "src/Disks/S3/registerDiskS3.cpp"] | IDiskRemote should use DiskPtr to handle the file system operations | In class IDiskRemote, all metadata file system operations use os file system api directly.
There is little extensibility if we want to store the metadata of HDFS or S3 in a distributed file system.
So I suggest changing this to use DiskPtr, and I will propose a pull request.
In addition, the metadata for HDFS or S3 is only stored on the local FS, which is not shareable.
| https://github.com/ClickHouse/ClickHouse/issues/31117 | https://github.com/ClickHouse/ClickHouse/pull/31136 | f65347308388e82c3283e8e687207b9ede54bd13 | c7be79b4e7efab59312ed4d829ea89152894de87 | "2021-11-06T03:18:40Z" | c++ | "2021-11-13T09:16:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,114 | ["src/Processors/Transforms/WindowTransform.cpp", "src/Processors/Transforms/WindowTransform.h", "tests/queries/0_stateless/02126_lc_window_functions.reference", "tests/queries/0_stateless/02126_lc_window_functions.sql"] | Crash in window function with LowCardinality datatype | **Describe what's wrong**
ClickHouse crashes if you execute `max(id) OVER (PARTITION BY id)` over a LowCardinality column.
**Does it reproduce on recent release?**
Yes.
ClickHouse version 21.11
**How to reproduce**
```
SELECT max(id) OVER (PARTITION BY id) AS id
FROM
(
SELECT materialize('aaaa') AS id
FROM numbers_mt(1000000)
)
FORMAT `Null`
Ok.
0 rows in set. Elapsed: 0.144 sec. Processed 1.00 million rows, 8.00 MB (6.95 million rows/s., 55.58 MB/s.)
SELECT max(id) OVER (PARTITION BY id) AS aid
FROM
(
SELECT materialize(toLowCardinality('aaaa')) AS id
FROM numbers_mt(1000000)
)
FORMAT `Null`
[ 14776 ] <Fatal> BaseDaemon: ########################################
[ 14776 ] <Fatal> BaseDaemon: (version 21.11.1.8526, build id: 33B572DD4B9C2BE5F65F58755B19729DE40A4869) (from thread 14740) (query_id: d5299144-24c8-4d91-a122-09065f33bbaa) Received signal Segmentation fault (11)
[ 14776 ] <Fatal> BaseDaemon: Address: 0x1 Access: read. Address not mapped to object.
[ 14776 ] <Fatal> BaseDaemon: Stack trace: 0x9b389b0 0x1323afd3 0x1323b45e 0x1324051f 0x13070e9b 0x1306ce51 0x13072e25 0x9b80c57 0x9b8465d 0x7f1fe3050609 0x7f1fe2f4a293
[ 14776 ] <Fatal> BaseDaemon: 2. memcpy @ 0x9b389b0 in /usr/bin/clickhouse
[ 14776 ] <Fatal> BaseDaemon: 3. DB::WindowTransform::updateAggregationState() @ 0x1323afd3 in /usr/bin/clickhouse
[ 14776 ] <Fatal> BaseDaemon: 4. DB::WindowTransform::appendChunk(DB::Chunk&) @ 0x1323b45e in /usr/bin/clickhouse
[ 14776 ] <Fatal> BaseDaemon: 5. DB::WindowTransform::work() @ 0x1324051f in /usr/bin/clickhouse
[ 14776 ] <Fatal> BaseDaemon: 6. ? @ 0x13070e9b in /usr/bin/clickhouse
[ 14776 ] <Fatal> BaseDaemon: 7. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x1306ce51 in /usr/bin/clickhouse
[ 14776 ] <Fatal> BaseDaemon: 8. ? @ 0x13072e25 in /usr/bin/clickhouse
[ 14776 ] <Fatal> BaseDaemon: 9. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x9b80c57 in /usr/bin/clickhouse
[ 14776 ] <Fatal> BaseDaemon: 10. ? @ 0x9b8465d in /usr/bin/clickhouse
[ 14776 ] <Fatal> BaseDaemon: 11. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
[ 14776 ] <Fatal> BaseDaemon: 12. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
[ 14776 ] <Fatal> BaseDaemon: Calculated checksum of the binary: 8E1781B29E0174AC66B92E409A0E916E. There is no information about the reference checksum.
```
**Expected behavior**
Both queries works
| https://github.com/ClickHouse/ClickHouse/issues/31114 | https://github.com/ClickHouse/ClickHouse/pull/31888 | 900e443900a9e2d343818ae41405c348917fd615 | 71df622b1f0371fc4fd525c7a8a404425c17fcee | "2021-11-05T21:45:24Z" | c++ | "2021-12-12T03:37:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,093 | ["docs/en/sql-reference/table-functions/s3.md", "programs/server/config.xml", "src/Common/Macros.cpp", "src/Common/Macros.h", "src/IO/S3/URI.cpp", "tests/integration/test_s3_style_link/__init__.py", "tests/integration/test_s3_style_link/configs/config.d/minio.xml", "tests/integration/test_s3_style_link/configs/users.d/users.xml", "tests/integration/test_s3_style_link/test.py"] | s3-style URL does not work. | **Describe the issue**
This works successfully:
```
aws s3 cp 'hits.csv' 's3://milovidov-clickhouse-test/hits.csv'
```
This does not:
```
SELECT count() FROM s3('s3://milovidov-clickhouse-test/hits.csv', '...', '...', 'CSV', 'WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, Refresh UInt8, RefererCategoryID UInt16, RefererRegionID UInt32, URLCategoryID UInt16, URLRegionID UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, OriginalURL String, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), LocalEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, RemoteIP UInt32, WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming UInt32, DNSTiming UInt32, ConnectTiming UInt32, ResponseStartTiming UInt32, ResponseEndTiming UInt32, FetchTiming UInt32, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32')
```
It argues about the URL:
```
Code: 36. DB::Exception: Bucket or key name are invalid in S3 URI. (BAD_ARGUMENTS)
```
But I'm using the same URL as with `s3 aws` tool :( | https://github.com/ClickHouse/ClickHouse/issues/31093 | https://github.com/ClickHouse/ClickHouse/pull/54931 | 2ce9251ed89a076caf74b117c2242bd1feb2cb86 | 8f9a227de1f530cdbda52c145d41a6b0f1d29961 | "2021-11-05T03:45:58Z" | c++ | "2023-09-29T04:12:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 31,092 | ["docs/en/sql-reference/functions/url-functions.md", "src/Functions/URL/decodeURLComponent.cpp", "src/Functions/URL/registerFunctionsURL.cpp", "tests/queries/0_stateless/00398_url_functions.reference", "tests/queries/0_stateless/00398_url_functions.sql"] | Is there such function:encodeURLComponent? | there is the function : decodeURLComponent ,why not encodeURLComponent?
| https://github.com/ClickHouse/ClickHouse/issues/31092 | https://github.com/ClickHouse/ClickHouse/pull/34607 | bce6947fb3941bab9581ffea4a3d806dc3660322 | 562f1ec01a13b79d337ac9a05a4e0a49f167b87a | "2021-11-05T01:10:01Z" | c++ | "2022-02-17T16:57:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,980 | ["src/Functions/FunctionsExternalDictionaries.h", "tests/queries/0_stateless/02125_dict_get_type_nullable_fix.reference", "tests/queries/0_stateless/02125_dict_get_type_nullable_fix.sql"] | Fatal Error if use dictGetString with Nullable String | > version 21.9.4.35 (official build) and version 21.10.2.15 (official build)
**If the attribute is nullable, a fatal error occurs when trying to get using dictGetString(). At the same time, dictGet() performs without errors.**
```
2021.11.02 09:09:07.707663 [ 9661 ] {} <Fatal> BaseDaemon: ########################################
2021.11.02 09:09:07.707716 [ 9661 ] {} <Fatal> BaseDaemon: (version 21.9.4.35 (official build), build id: 5F55EEF74E2818F777B4052BF503DF5BA7BFD787) (from thread 245) (query_id: 93bc3f5b-4e86-4a5e-a888-3103860d9c5a) Received signal Segmentation fault (11)
2021.11.02 09:09:07.707752 [ 9661 ] {} <Fatal> BaseDaemon: Address: 0xc380 Access: read. Address not mapped to object.
2021.11.02 09:09:07.707795 [ 9661 ] {} <Fatal> BaseDaemon: Stack trace: 0x937ceb0 0x1186f3bf 0x1186ee00 0x118bdb74 0x93adeb8 0x93afa5f 0x93ab19f 0x93aea83 0x7f4c86cd6609 0x7f4c86bd2293
2021.11.02 09:09:07.707912 [ 9661 ] {} <Fatal> BaseDaemon: 1. void DB::writeAnyEscapedString<(char)39, false>(char const*, char const*, DB::WriteBuffer&) @ 0x937ceb0 in /usr/bin/clickhouse
2021.11.02 09:09:07.708281 [ 9661 ] {} <Fatal> BaseDaemon: 2. DB::IRowOutputFormat::write(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > const&, unsigned long) @ 0x1186f3bf in /usr/bin/clickhouse
2021.11.02 09:09:07.708330 [ 9661 ] {} <Fatal> BaseDaemon: 3. DB::IRowOutputFormat::consume(DB::Chunk) @ 0x1186ee00 in /usr/bin/clickhouse
2021.11.02 09:09:07.708358 [ 9661 ] {} <Fatal> BaseDaemon: 4. DB::ParallelFormattingOutputFormat::formatterThreadFunction(unsigned long, std::__1::shared_ptr<DB::ThreadGroupStatus> const&) @ 0x118bdb74 in /usr/bin/clickhouse
2021.11.02 09:09:07.708385 [ 9661 ] {} <Fatal> BaseDaemon: 5. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x93adeb8 in /usr/bin/clickhouse
2021.11.02 09:09:07.708417 [ 9661 ] {} <Fatal> BaseDaemon: 6. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0x93afa5f in /usr/bin/clickhouse
2021.11.02 09:09:07.708446 [ 9661 ] {} <Fatal> BaseDaemon: 7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x93ab19f in /usr/bin/clickhouse
2021.11.02 09:09:07.708469 [ 9661 ] {} <Fatal> BaseDaemon: 8. ? @ 0x93aea83 in /usr/bin/clickhouse
2021.11.02 09:09:07.708502 [ 9661 ] {} <Fatal> BaseDaemon: 9. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021.11.02 09:09:07.708533 [ 9661 ] {} <Fatal> BaseDaemon: 10. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.11.02 09:09:07.824821 [ 9661 ] {} <Fatal> BaseDaemon: Checksum of the binary: BEA07E96B6BEBA1591FE837CF53C7591, integrity check passed.
2021.11.02 09:09:27.757877 [ 93 ] {} <Fatal> Application: Child process was terminated by signal 11.
```
```drop table if exists default.test_table
;
create table if not exists default.test_table
(
id UInt64 default toUInt64(now64()),
any_text Nullable(String)
) engine MergeTree primary key id
;
drop dictionary if exists default.test_dict
;
create dictionary if not exists default.test_dict (id UInt64,
any_text Nullable(String))
primary key id
SOURCE (CLICKHOUSE(HOST 'localhost' PORT 9000 DB 'default' TABLE 'test_table'))
LAYOUT(HASHED)
LIFETIME(min 10 max 20)
;
insert into default.test_table (id,any_text)
values (1635818854,null)
;
select * from default.test_dict
--normal
;
select dictGet('default.test_dict','any_text',toUInt64(1635818854))
-- normal
;
select dictGetString('default.test_dict','any_text',toUInt64(1635818854))
-- error
;
```
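As noted above, only the typed getter crashes; until this is fixed, a minimal workaround implied by the repro is to stick with the untyped `dictGet()`:

```sql
-- per the repro above, plain dictGet() returns the Nullable value without crashing
select dictGet('default.test_dict', 'any_text', toUInt64(1635818854));
```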
* 21.9.4 release and 21.10.2 release
* WebStorm and DataGrip
* default> select dictGetString('default.test_dict','any_text',toUInt64(1635818854))
[2021-11-02 09:20:28] Connection refused: connect | https://github.com/ClickHouse/ClickHouse/issues/30980 | https://github.com/ClickHouse/ClickHouse/pull/31800 | 1cda5bfe4e10571b9c4d046a13a9a1d94f031f70 | a426ed0a5af97a195f4c9b03fa6d37caa4a70d3a | "2021-11-02T02:22:56Z" | c++ | "2021-12-02T09:27:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,975 | ["src/AggregateFunctions/AggregateFunctionSumMap.h", "src/DataTypes/DataTypeAggregateFunction.cpp", "src/Parsers/ParserDataType.cpp", "tests/queries/0_stateless/02511_complex_literals_as_aggregate_function_parameters.reference", "tests/queries/0_stateless/02511_complex_literals_as_aggregate_function_parameters.sql"] | Exception thrown when using sumMapFiltered on a distributed table | **Describe what's wrong**
An exception is thrown when trying to use the `sumMapFiltered` function on a distributed table.
**Does it reproduce on recent release?**
Yes. The tests below ran on ClickHouse 21.9.3
**How to reproduce**
Create the test tables and insert a couple rows of data:
```sql
CREATE TABLE local ON CLUSTER 'cluster_name' (
time DateTime,
data Nested(
key Int16,
value Int64
)
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(time)
ORDER BY time
CREATE TABLE distributed ON CLUSTER 'cluster_name' (
time DateTime,
data Nested(
key Int16,
value Int64
)
) ENGINE = Distributed('cluster_name', 'cluster_name', 'local')
INSERT INTO local VALUES
(1635794296, [1, 2, 3], [10, 10, 10]),
(1635794296, [1, 2, 3], [10, 10, 10]),
(1635794296, [1, 2, 3], [10, 10, 10])
```
Then run this query against the distributed table:
```sql
SELECT sumMapFiltered([toInt16(1),toInt16(2)])(data.key, data.value) FROM distributed
```
**Expected behavior**
I would expect the query to return the correct aggregation value, but an error is thrown instead
**Error message and/or stacktrace**
```
Received exception from server (version 21.9.3):
Code: 62. DB::Exception: Received from localhost:9000. DB::Exception: Syntax error (data type): failed at position 34 ('['): [1, 2]), Array(Int16), Array(Int64)). Expected one of: number, nested table, literal, NULL, identifier, data type argument, string literal, name and type pair list, list of elements, data type, list, delimited by binary operators, name and type pair: while receiving packet from [redacted]:9000: While executing Remote. (SYNTAX_ERROR)
```
**Additional context**
The exact same query works as expected when executed against the local table.
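A possible interim workaround (an untested sketch, using the same tables as above) is to pre-filter the key/value arrays and use plain `sumMap`, so that the parametrised aggregate never has to travel through the remote query:

```sql
SELECT sumMap(
    arrayFilter(k -> k IN (toInt16(1), toInt16(2)), data.key),
    arrayFilter((v, k) -> k IN (toInt16(1), toInt16(2)), data.value, data.key)
)
FROM distributed
```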
| https://github.com/ClickHouse/ClickHouse/issues/30975 | https://github.com/ClickHouse/ClickHouse/pull/44358 | 1b4121459d4a61b7f05609f5c885fedcb19e9248 | ab719f44326268fc7673ef7ea8f2a2dd3eee290b | "2021-11-01T19:34:50Z" | c++ | "2022-12-28T13:38:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,942 | ["src/Common/ErrorCodes.cpp", "src/Parsers/ParserSelectQuery.cpp", "tests/queries/0_stateless/02114_offset_fetch_without_order_by.reference", "tests/queries/0_stateless/02114_offset_fetch_without_order_by.sh"] | Syntax error: failed at position xx ('FETCH'): Expected FETCH | **Describe the unexpected behaviour**
`OFFSET` works without `ORDER BY`, but `OFFSET ... FETCH` does not, and it throws a strange syntax error: `failed at 'FETCH' Expected FETCH`.
**How to reproduce**
* Clickhouse Version: `21.10.2.15`
* `CREATE TABLE test_fetch (a INT, b INT) ENGINE=MergeTree ORDER BY b;`
* `INSERT INTO test_fetch VALUES (1, 1), (2, 1), (3, 4), (1, 4), (5, 4), (0, 6), (5, 7);`
* `OFFSET` without `ORDER BY` works: `SELECT * FROM test_fetch OFFSET 3 ROW;`
* `OFFSET FETCH` with `ORDER BY` works: `SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROWS FETCH FIRST 3 ROWS ONLY;`
* `OFFSET FETCH` without `ORDER BY` does ***NOT*** work: `SELECT * FROM test_fetch OFFSET 3 ROWS FETCH FIRST 3 ROWS ONLY;`
**Expected behavior**
Same behavior or correct error message.
**Error message and/or stacktrace**

| https://github.com/ClickHouse/ClickHouse/issues/30942 | https://github.com/ClickHouse/ClickHouse/pull/31031 | 1c0ee150381dbedbd17c9209a14b90ee94405fe2 | b062c8ca513d74bef2cd07ae06297fadcc37d6a8 | "2021-11-01T03:54:24Z" | c++ | "2021-11-11T08:37:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,919 | ["src/Processors/Formats/Impl/TabSeparatedRowOutputFormat.cpp", "tests/queries/0_stateless/02157_line_as_string_output_format.reference", "tests/queries/0_stateless/02157_line_as_string_output_format.sql"] | FORMAT `LineAsString` should be suitable for output | **Use case**
For symmetry reasons, because it is already supported for input.
**Describe the solution you'd like**
The same as `TSVRaw`. | https://github.com/ClickHouse/ClickHouse/issues/30919 | https://github.com/ClickHouse/ClickHouse/pull/33331 | 3737d83d3e8a47f61d30c21ba01fd2bc79581a87 | 34b934a1e0fe5387d61d5675b9cd3583f1926cb8 | "2021-10-31T15:49:41Z" | c++ | "2021-12-31T11:38:36Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,918 | ["src/Client/ClientBase.cpp", "src/Formats/FormatFactory.cpp", "src/Formats/FormatFactory.h", "src/Formats/registerFormats.cpp", "src/Parsers/ParserInsertQuery.cpp", "tests/queries/0_stateless/02165_auto_format_by_file_extension.reference", "tests/queries/0_stateless/02165_auto_format_by_file_extension.sh"] | INTO OUTFILE / FROM INFILE: autodetect FORMAT by file extension | **Use case**
This works:
```
SELECT * FROM hits_100m_obfuscated INTO OUTFILE 'hits.csv' FORMAT CSV
```
But I want this:
```
SELECT * FROM hits_100m_obfuscated INTO OUTFILE 'hits.csv'
```
**Describe the solution you'd like**
Every format can register file extension(s) in format factory.
If format is not specified explicitly, we can detect it automatically by file extension.
It should work for compressed files `hits.csv.gz` as well.
The following extensions should be supported at least:
csv, tsv, parquet, orc, native, json, ndjson, xml, md, avro
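To illustrate the proposal (a sketch of the intended behaviour, not syntax that works today):

```sql
-- FORMAT (and compression) inferred from the file extension
SELECT * FROM hits_100m_obfuscated INTO OUTFILE 'hits.parquet';   -- Parquet
SELECT * FROM hits_100m_obfuscated INTO OUTFILE 'hits.csv.gz';    -- CSV, gzip-compressed
```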
**Additional details**
Let's also support omitting the format argument for `file`, `url`, `s3`, `hdfs` table functions. | https://github.com/ClickHouse/ClickHouse/issues/30918 | https://github.com/ClickHouse/ClickHouse/pull/33443 | f299c359044e9985bd35fc9875148690f15469d0 | a142b7c55e3a063bd4515f7df91d8794b748c2c9 | "2021-10-31T15:39:59Z" | c++ | "2022-01-12T08:23:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,897 | ["src/Storages/MergeTree/MergeTreeData.cpp", "tests/integration/test_attach_partition_with_large_destination/__init__.py", "tests/integration/test_attach_partition_with_large_destination/configs/config.xml", "tests/integration/test_attach_partition_with_large_destination/test.py"] | ATTACH PARTITION fails when existing destination table's partition is large | **Unexpected behavior**
The ATTACH PARTITION command fails when the destination table's existing partition size is larger than `max_partition_size_to_drop`.
Attaching a tiny sample of a few hundred rows from `db.source_table_sample` to `db.destination_table_full` fails because the size of partition `202108` (53.46 GB) is larger than the default `max_partition_size_to_drop` (50 GB).
Error message:
```sql
ALTER TABLE db.destination_table_full
ATTACH PARTITION 202108 FROM db.source_table_sample
Query id: adf2c8fb-ba8a-4846-9c32-fabdb0d387b9
0 rows in set. Elapsed: 0.002 sec.
Received exception from server (version 21.7.11):
Code: 359. DB::Exception: Received from localhost:9440. DB::Exception: Table or Partition in db.destination_table_full was not dropped.
Reason:
1. Size (53.46 GB) is greater than max_[table/partition]_size_to_drop (50.00 GB)
2. File '/drives/ssd1/clickhouse/flags/force_drop_table' intended to force DROP doesn't exist
How to fix this:
1. Either increase (or set to zero) max_[table/partition]_size_to_drop in server config
2. Either create forcing file /drives/ssd1/clickhouse/flags/force_drop_table and make sure that ClickHouse has write permission for it.
Example:
sudo touch '/drives/ssd1/clickhouse/flags/force_drop_table' && sudo chmod 666 '/drives/ssd1/clickhouse/flags/force_drop_table'.
```
However, if I try to do the same with a new, empty table `db.destination_table_sample`, the partition is attached successfully.
```sql
ALTER TABLE db.destination_table_sample
ATTACH PARTITION 202108 FROM db.source_table_sample
Query id: f1567cf5-b1da-476c-aebb-3cf9f767849e
Ok.
0 rows in set. Elapsed: 0.003 sec.
```
How I calculated partition size:
```sql
SELECT
partition,
formatReadableSize(sum(bytes_on_disk)),
round(((sum(bytes_on_disk) / 1000) / 1000) / 1000, 2) AS GB
FROM system.parts
WHERE table = 'destination_table_full'
GROUP BY partition
Query id: 33862450-3dd5-410e-aee0-30fbee339b68
┌─partition─┬─formatReadableSize(sum(bytes_on_disk))─┬────GB─┐
│ 202107 │ 2.07 GiB │ 2.22 │
│ 202108 │ 49.79 GiB │ 53.46 │
│ 202110 │ 5.85 GiB │ 6.28 │
│ 202109 │ 17.89 GiB │ 19.21 │
└───────────┴────────────────────────────────────────┴───────┘
```
**Expected behavior**
Partitions should be attached successfully regardless of `max_partition_size_to_drop` value
**How to reproduce**
ClickHouse server version: 21.7.11.3 (official build).
Which interface to use: clickhouse-client (I originally noticed this behavior in the logs of clickhouse-copier) | https://github.com/ClickHouse/ClickHouse/issues/30897 | https://github.com/ClickHouse/ClickHouse/pull/30995 | d15fc85c37185d3dadc9a0b20f7dfc4d82d2993b | ad81977acebfd39f34921373e23933373226e019 | "2021-10-31T11:35:53Z" | c++ | "2021-11-08T10:07:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,890 | ["base/base/argsToConfig.cpp", "base/base/argsToConfig.h", "src/Client/ClientBase.cpp", "tests/queries/0_stateless/02718_cli_dashed_options_parsing.reference", "tests/queries/0_stateless/02718_cli_dashed_options_parsing.sh"] | Allow to specify settings with dashed-style in addition to underscore_style in command line parameters of clickhouse-client/clickhouse-local. | **Use case**
`clickhouse-client --max-memory-usage 1G` | https://github.com/ClickHouse/ClickHouse/issues/30890 | https://github.com/ClickHouse/ClickHouse/pull/48985 | e78ec28f881a5a450dded2fcf4b83fdeb9b04a01 | 9772e8e2214f076a254fc162d1e8ead8a8888b80 | "2021-10-30T21:55:15Z" | c++ | "2023-04-23T02:04:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,874 | ["docs/en/sql-reference/statements/alter/column.md", "docs/ru/sql-reference/statements/alter/column.md", "src/Parsers/ParserAlterQuery.cpp", "src/Parsers/ParserCreateQuery.h", "tests/queries/0_stateless/02126_alter_table_alter_column.reference", "tests/queries/0_stateless/02126_alter_table_alter_column.sql"] | Support PostgreSQL style ALTER MODIFY COLUMN | **Use case**
PostgreSQL is using `ALTER TABLE t ALTER COLUMN c TYPE type` instead of `ALTER TABLE t MODIFY COLUMN c type`.
There is a risk that some people already get used to PostgreSQL syntax.
Let's implement it in ClickHouse.
```
tutorial=# ALTER TABLE hits_100m_obfuscated MODIFY COLUMN UserAgentMinor TEXT
tutorial-# ;
ERROR: syntax error at or near "MODIFY"
LINE 1: ALTER TABLE hits_100m_obfuscated MODIFY COLUMN UserAgentMino...
^
tutorial=# ALTER TABLE hits_100m_obfuscated ALTER COLUMN UserAgentMinor TYPE TEXT
;
ALTER TABLE
tutorial=#
```
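For clarity, a sketch of how the two spellings would map onto each other in ClickHouse (the first form already exists, the second is the requested addition):

```sql
-- existing ClickHouse syntax
ALTER TABLE hits_100m_obfuscated MODIFY COLUMN UserAgentMinor String;
-- requested PostgreSQL-style equivalent
ALTER TABLE hits_100m_obfuscated ALTER COLUMN UserAgentMinor TYPE String;
```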
**Proposed implementation**
Implement it in parser level. | https://github.com/ClickHouse/ClickHouse/issues/30874 | https://github.com/ClickHouse/ClickHouse/pull/32003 | 96ec92c7cd127b08453a83d2aca23f2ec26c67df | 1f9b542ee906349af69a2439bad4c14947b8ce17 | "2021-10-29T22:45:49Z" | c++ | "2021-12-01T01:10:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,873 | ["src/Client/ClientBase.cpp", "src/Client/ClientBase.h"] | Looks like parallel formatting is not activated. | **Describe the situation**
```
SELECT * FROM test.hits INTO OUTFILE 'dump.csv' FORMAT CSV
8873898 rows in set. Elapsed: 32.487 sec. Processed 8.87 million rows, 8.46 GB (273.15 thousand rows/s., 260.42 MB/s.)
```
is bounded by single CPU core. Too slow :( | https://github.com/ClickHouse/ClickHouse/issues/30873 | https://github.com/ClickHouse/ClickHouse/pull/30886 | 2b28e87a7f9076779fdaa9e98052173f681ba1b5 | 6b2dc88dd60afb3925973195d7d2b02533c721a5 | "2021-10-29T22:10:10Z" | c++ | "2021-10-31T09:27:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,872 | ["src/Client/ClientBase.cpp", "src/Client/ClientBase.h"] | Progress bar is flickering when saving result to file. | If you execute this query in interactive mode of `clickhouse-client`:
```
SELECT WatchID::Int64, JavaEnable, toValidUTF8(Title), GoodEvent, EventTime, EventDate, CounterID::Int32, ClientIP::Int32, RegionID::Int32, UserID::Int64, CounterClass, OS, UserAgent, toValidUTF8(URL), toValidUTF8(Referer), Refresh, RefererCategoryID::Int16, RefererRegionID::Int32, URLCategoryID::Int16, URLRegionID::Int32, ResolutionWidth::Int16, ResolutionHeight::Int16, ResolutionDepth, FlashMajor, FlashMinor, FlashMinor2, NetMajor, NetMinor, UserAgentMajor::Int16, UserAgentMinor, CookieEnable, JavascriptEnable, IsMobile, MobilePhone, toValidUTF8(MobilePhoneModel), toValidUTF8(Params), IPNetworkID::Int32, TraficSourceID, SearchEngineID::Int16, toValidUTF8(SearchPhrase), AdvEngineID, IsArtifical, WindowClientWidth::Int16, WindowClientHeight::Int16, ClientTimeZone, ClientEventTime, SilverlightVersion1, SilverlightVersion2, SilverlightVersion3::Int32, SilverlightVersion4::Int16, toValidUTF8(PageCharset), CodeVersion::Int32, IsLink, IsDownload, IsNotBounce, FUniqID::Int64, toValidUTF8(OriginalURL), HID::Int32, IsOldCounter, IsEvent, IsParameter, DontCountHits, WithHash, HitColor, LocalEventTime, Age, Sex, Income, Interests::Int16, Robotness, RemoteIP::Int32, WindowName, OpenerName, HistoryLength, BrowserLanguage, BrowserCountry, toValidUTF8(SocialNetwork), toValidUTF8(SocialAction), HTTPError, SendTiming, DNSTiming, ConnectTiming, ResponseStartTiming, ResponseEndTiming, FetchTiming, SocialSourceNetworkID, toValidUTF8(SocialSourcePage), ParamPrice, toValidUTF8(ParamOrderID), ParamCurrency, ParamCurrencyID::Int16, OpenstatServiceName, OpenstatCampaignID, OpenstatAdID, OpenstatSourceID, UTMSource, UTMMedium, UTMCampaign, UTMContent, UTMTerm, FromTag, HasGCLID, RefererHash::Int64, URLHash::Int64, CLID::Int32
FROM hits_100m_obfuscated
INTO OUTFILE 'dump.csv'
FORMAT CSV
```
progress bar is displaying only occasionally and instantly cleared. | https://github.com/ClickHouse/ClickHouse/issues/30872 | https://github.com/ClickHouse/ClickHouse/pull/30886 | 2b28e87a7f9076779fdaa9e98052173f681ba1b5 | 6b2dc88dd60afb3925973195d7d2b02533c721a5 | "2021-10-29T22:05:53Z" | c++ | "2021-10-31T09:27:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,870 | ["src/IO/ReadHelpers.cpp", "src/IO/ReadHelpers.h", "tests/queries/0_stateless/01073_bad_alter_partition.reference", "tests/queries/0_stateless/01073_bad_alter_partition.sql", "tests/queries/0_stateless/02112_parse_date_yyyymmdd.reference", "tests/queries/0_stateless/02112_parse_date_yyyymmdd.sh"] | Cannot parse `Date` in form of YYYYMMDD from CSV. | ```
$ echo '20210101,A' | clickhouse-local --input-format CSV --structure 'd Date, s String' --query "SELECT * FROM table"
Code: 27. DB::Exception: Cannot parse input: expected ',' before: '20210101,A\n':
Row 1:
Column 0, name: d, type: Date, ERROR: text "20210101,A" is not like Date
: While executing ParallelParsingBlockInputFormat: While executing File. (CANNOT_PARSE_INPUT_ASSERTION_FAILED)
```
**How to solve**
Disambiguate while parsing by absense of separators between YYYY-MM-DD. | https://github.com/ClickHouse/ClickHouse/issues/30870 | https://github.com/ClickHouse/ClickHouse/pull/30871 | 2e3ff53725611f512c5a7eb88a78e492dfc57b23 | 9adff8a2b82b6052bbb4e42e83e9ee7c04caed99 | "2021-10-29T20:04:58Z" | c++ | "2021-10-30T18:10:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,831 | ["src/Storages/LiveView/StorageLiveView.cpp"] | LIVE VIEWS are underusing available CPU | **Describe the situation**
What exactly works slower than expected?
Seems that LIVE VIEWS are underusing the available CPU.
**How to reproduce**
I'm using a GCP VM with 4 CPUs, 32 GB RAM and a 727 GB disk.
The following code creates a table called 'test' and inserts random values into it.
It will insert 1 billion rows as transactions between 2021-09-01 and 2021-09-10, with random values in the amount column.
Then there are the selects to compare the performance of a live view that groups the amount transacted each day with the performance of a 'pure select' (the same query as the LIVE VIEW definition).
If you run htop during the simple select you can see all cores working together, but during the first execution of the select from the LIVE VIEW you will see idle cores and a drop in query performance compared to the original select.
If you use more data or more complex queries with aggregated columns it gets worse.
```
-- create table test
CREATE TABLE test (
id Int64,
timestamp DateTime,
amount Decimal(4,2)
) ENGINE = MergeTree()
ORDER BY timestamp AS
SELECT
*,
toDateTime((modulo(rand(),10)/10)*(toUnixTimestamp('2021-09-10 00:00:00')-toUnixTimestamp('2021-09-01 00:00:00'))+toUnixTimestamp('2021-09-01 00:00:00')),
toDecimal32(((modulo(rand(),100)/100)*(2)+(-1))*1000,2) as amount
FROM numbers(0, 1000000000)
-- just select and see htop
SELECT
toStartOfDay(timestamp),
MAX(amount) AS amount
FROM test AS mc
GROUP BY
toStartOfDay(timestamp)
ORDER BY toStartOfDay(timestamp) DESC
-- create live view
CREATE LIVE VIEW test_lv WITH REFRESH 120
AS SELECT
toStartOfDay(timestamp) as timestamp,
MAX(amount) AS amount
FROM test AS mc
GROUP BY
timestamp
ORDER BY timestamp DESC;
-- select and execute for the first time the live view (see htop)
SELECT * FROM test_lv ORDER BY timestamp DESC LIMIT 10;
```
**Which ClickHouse server version to use**
version 21.9.4 revision 54449
**Which interface to use, if matters**
clickhouse-client version 21.9.4.35 (official build)
**Expected performance**
I was expecting the live view to execute over all the available hardware, with performance comparable to the 'pure select' one.
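One way to quantify this (a sketch; it assumes query_log is enabled) is to compare how many threads each query actually used:

```sql
SELECT query, length(thread_ids) AS threads_used
FROM system.query_log
WHERE type = 'QueryFinish'
  AND (query ILIKE '%test_lv%' OR query ILIKE '%FROM test%')
ORDER BY event_time DESC
LIMIT 10;
```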
**Additional context**
I'm attaching some screenshots where you can see the performance issue clearly.
pure select performance

pure select cores

live view performance

live view cores

| https://github.com/ClickHouse/ClickHouse/issues/30831 | https://github.com/ClickHouse/ClickHouse/pull/31006 | 710fbebafc8b7aee751c0b94de02956bfda3dfc4 | 15cd3dc30780822a3a966f805e10a2e88b1c0b08 | "2021-10-29T01:43:58Z" | c++ | "2021-11-16T09:43:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,809 | ["src/Dictionaries/getDictionaryConfigurationFromAST.cpp", "tests/queries/0_stateless/02162_range_hashed_dictionary_ddl_expression.reference", "tests/queries/0_stateless/02162_range_hashed_dictionary_ddl_expression.sql"] | Dictionary EXPRESSION doesn't work for range boundary columns | **Describe the unexpected behaviour**
It's not possible to calculate the min and max values for a range dictionary via EXPRESSION.
**How to reproduce**
ClickHouse version 21.11
```
CREATE TABLE test_range_dict
(
`id` UInt32,
`date` Date,
`start` Date,
`end` Date
)
ENGINE = Memory
CREATE DICTIONARY IF NOT EXISTS test_range_dict_d
(
`id` UInt64,
`date` Date,
`start` Date,
`end` Date,
`start_date` Date EXPRESSION date - toIntervalDay(1),
`end_date` Date EXPRESSION any(date) OVER (PARTITION BY id ORDER BY date ASC Rows BETWEEN 1 PRECEDING AND 1 PRECEDING)
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(TABLE 'test_range_dict'))
LIFETIME(MIN 0 MAX 120)
LAYOUT(RANGE_HASHED())
RANGE(MIN start MAX end)
Ok.
0 rows in set. Elapsed: 0.006 sec.
SELECT *
FROM test_range_dict_d
Ok.
0 rows in set. Elapsed: 0.011 sec.
DROP DICTIONARY test_range_dict_d;
CREATE DICTIONARY IF NOT EXISTS test_range_dict_d
(
`id` UInt64,
`date` Date,
`start` Date,
`end` Date,
`start_date` Date EXPRESSION date - toIntervalDay(1),
`end_date` Date EXPRESSION any(date) OVER (PARTITION BY id ORDER BY date ASC Rows BETWEEN 1 PRECEDING AND 1 PRECEDING)
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(TABLE 'test_range_dict'))
LIFETIME(MIN 0 MAX 120)
LAYOUT(RANGE_HASHED())
RANGE(MIN start_date MAX end_date)
Ok.
0 rows in set. Elapsed: 0.008 sec.
SELECT *
FROM test_range_dict_d
0 rows in set. Elapsed: 0.002 sec.
Received exception from server (version 21.11.1):
Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Missing columns: 'end_date' 'start_date' while processing query: 'SELECT id, start_date, end_date, date, start, end FROM default.test_range_dict', required columns: 'start_date' 'id' 'end' 'end_date' 'start' 'date', maybe you meant: ['id','end','start','date']. (UNKNOWN_IDENTIFIER)
```
**Expected behavior**
Dictionary works
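A possible workaround (an untested sketch) would be to materialize the boundary columns in a view and point the dictionary source at that view, so the range columns are plain columns instead of EXPRESSIONs:

```sql
CREATE VIEW test_range_dict_src AS
SELECT
    `id`, `date`, `start`, `end`,
    `date` - toIntervalDay(1) AS start_date,
    any(`date`) OVER (PARTITION BY `id` ORDER BY `date` ASC ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS end_date
FROM test_range_dict;
-- then SOURCE(CLICKHOUSE(TABLE 'test_range_dict_src')) with RANGE(MIN start_date MAX end_date)
```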
| https://github.com/ClickHouse/ClickHouse/issues/30809 | https://github.com/ClickHouse/ClickHouse/pull/33478 | 4beaf7398a4bb1d3baa7db8df2b6389454f0f81b | b46ce6b4a96766e3da379c8752ee17b28c1979cd | "2021-10-28T14:10:50Z" | c++ | "2022-01-09T14:57:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,802 | ["docker/test/fasttest/run.sh", "docker/test/stateful/run.sh", "docker/test/stateless/run.sh", "tests/clickhouse-test"] | Failed tests are retried without need | https://github.com/ClickHouse/ClickHouse/pull/29856 introduced a check to detect if the ZK session had died or expired during a test run and retry, but the logic is flawed.
If you don't have a connection to ZK open (because you aren't using replication), the `zookeeperSessionUptime()` function always returns 0, which makes a failed test always run 3 times for no reason. This is a PITA when working on adding new features / tests since you end up waiting 3 times for the result.
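For reference, the value in question can be inspected directly (a sketch); with no ZK session it reportedly returns 0, which is indistinguishable from a session that has just been (re)created:

```sql
SELECT zookeeperSessionUptime();
```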
| https://github.com/ClickHouse/ClickHouse/issues/30802 | https://github.com/ClickHouse/ClickHouse/pull/30847 | 75a4556067cca85c233818ee1e1d8877d579223d | 059c1ebf36fec35a7294211975342c9a22d38383 | "2021-10-28T12:21:12Z" | c++ | "2021-10-29T17:18:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,784 | ["src/Storages/StorageBuffer.cpp", "src/Storages/StorageDistributed.cpp", "src/Storages/StorageMerge.cpp", "tests/queries/0_stateless/02111_modify_table_comment.reference", "tests/queries/0_stateless/02111_modify_table_comment.sql"] | change the table comment after created the table is not supported now? | Change the table comment after created the table is not supported now?
Like MySQL: ALTER TABLE [tableName] COMMENT [comment];
The SQL "ALTER TABLE `system`.tables UPDATE Comment='test' WHERE NAME='t1';" is also not supported.
My version is '21.9.4.35'。 | https://github.com/ClickHouse/ClickHouse/issues/30784 | https://github.com/ClickHouse/ClickHouse/pull/30852 | eb3e461f96f6ed2ef092f6c9bcb740bb4b93cf92 | f652c8ce24579274eba57908731f54df2b38373c | "2021-10-28T03:52:09Z" | c++ | "2021-10-31T19:12:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,755 | ["src/Interpreters/InterpreterCreateFunctionQuery.cpp", "tests/queries/0_stateless/02148_sql_user_defined_function_subquery.reference", "tests/queries/0_stateless/02148_sql_user_defined_function_subquery.sql"] | Accept query statement at CREATE FUNCTION feature | Clickhouse Version: 21.10.2.15
After a quick test, I couldn't make a function that uses a query as its statement.
**Use case**
It is useful to have some prepared queries that could work as functions
**Describe the solution you'd like**
```sql
CREATE FUNCTION plus_tutu as () -> ((select `orderYM` from `bucket_18`.`flattable` limit 1))
```
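For contrast, a minimal sketch of what the current UDF feature does accept (expression bodies only; the subquery form above is the missing piece):

```sql
CREATE FUNCTION plus_one AS (x) -> x + 1;
SELECT plus_one(41);  -- 42
```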
| https://github.com/ClickHouse/ClickHouse/issues/30755 | https://github.com/ClickHouse/ClickHouse/pull/32758 | ec46cbef20545f932e2f57d8f1348fe2a6502678 | 655cc205254f993ffe266f61fcef25da6091e698 | "2021-10-27T14:10:56Z" | c++ | "2021-12-15T12:57:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,679 | ["tests/queries/0_stateless/02510_group_by_prewhere_null.reference", "tests/queries/0_stateless/02510_group_by_prewhere_null.sql"] | Error in executing 'group by' with 'prewhere' | Steps to reproduce:
```
create table table1 (
col1 Int32,
col2 Int32
)
ENGINE = MergeTree
partition by tuple()
order by col1;
```
```
with :id as pid
select a.col1, sum(a.col2) as summ
from table1 a
prewhere (pid is null or a.col2 = pid)
group by a.col1;
```
When `null` is passed as the `:id` parameter, the query above produces an error:
```
Code: 215, e.displayText() = DB::Exception: Column `col2` is not under aggregate function and not in GROUP BY: While processing col2 (version 21.8.3.44 (official build))
```
It doesn't matter whether the table is distributed or not.
If you replace `prewhere` with `where`, pass a non-null value, or remove the `is null` part of the condition, the query executes without error.
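For clarity, the working variant mentioned above looks like this (the same query with `where` instead of `prewhere`):

```sql
with :id as pid
select a.col1, sum(a.col2) as summ
from table1 a
where (pid is null or a.col2 = pid)
group by a.col1;
```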
This reproducible on 21.8.3.44 and above. | https://github.com/ClickHouse/ClickHouse/issues/30679 | https://github.com/ClickHouse/ClickHouse/pull/44357 | b5431e971e4ad8485e2b2bcfa45f15d9d84d808e | 8d23d2f2f28bbccec309205d77f32d1388f78e03 | "2021-10-26T09:13:59Z" | c++ | "2022-12-27T11:46:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,672 | ["programs/install/Install.cpp"] | Minor usability improvement of install script | **Use case**
It prints:
```
ClickHouse has been successfully installed.
Start clickhouse-server with:
sudo clickhouse start
```
But if clickhouse-server is already running, let's print:
```
ClickHouse has been successfully installed.
Restart clickhouse-server with:
sudo clickhouse restart
``` | https://github.com/ClickHouse/ClickHouse/issues/30672 | https://github.com/ClickHouse/ClickHouse/pull/30830 | 94039ace63c1f47e338e4d12f4c9f8e17b7bf0a3 | fe4d134b1f3b3111ccb09c131c5fbe999b67dab2 | "2021-10-26T05:36:40Z" | c++ | "2021-10-31T11:33:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,642 | ["docs/en/operations/settings/settings.md", "src/Core/Settings.h", "src/Core/SettingsChangesHistory.h", "src/Interpreters/TreeRewriter.cpp", "tests/queries/0_stateless/02554_rewrite_count_distinct_if_with_count_distinct_implementation.reference", "tests/queries/0_stateless/02554_rewrite_count_distinct_if_with_count_distinct_implementation.sql"] | countDistinctIf with Date use much more memory than UInt16 | **Describe the situation**
countDistinctIf with the Date data type uses more memory than with UInt16.
**How to reproduce**
ClickHouse version 21.11
```
SELECT countDistinctIf(materialize(toUInt16(today())), 1)
FROM numbers_mt(100000000)
GROUP BY number % 1000000
FORMAT `Null`
Peak memory usage (for query): 1.51 GiB.
0 rows in set. Elapsed: 2.465 sec. Processed 100.00 million rows, 800.00 MB (40.57 million rows/s., 324.57 MB/s.)
SELECT countDistinctIf(materialize(today()), 1)
FROM numbers_mt(100000000)
GROUP BY number % 1000000
FORMAT `Null`
Peak memory usage (for query): 4.51 GiB.
0 rows in set. Elapsed: 4.455 sec. Processed 100.00 million rows, 800.00 MB (22.44 million rows/s., 179.56 MB/s.)
```
**Expected performance**
Speed and memory usage should be the same as for UInt16
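Until then, an interim workaround (a sketch; `t`, `event_date` and `cond` are hypothetical names) is to cast the Date column to UInt16 explicitly, as in the faster query above:

```sql
SELECT countDistinctIf(toUInt16(event_date), cond) FROM t
```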
| https://github.com/ClickHouse/ClickHouse/issues/30642 | https://github.com/ClickHouse/ClickHouse/pull/46051 | f4040236c60f397f1004694f4cef0a5e7709dfc0 | 7886e06217bf1800560fc5febae3aac80e048eba | "2021-10-25T11:17:00Z" | c++ | "2023-08-06T16:31:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,559 | ["src/IO/S3Common.cpp"] | Excessive logging for s3: Failed to find credentials process's profile | When selecting from S3 table with `use_environment_credentials=true` CH writes lots of messages like this
```
<Error> AWSClient: ProcessCredentialsProvider: Failed to find credential process's profile: default
```
I've dug into the CH and AWS SDK codebase a little, and it seems like the AWS client is supposed to cache the credentials obtained from the instance metadata service in the local filesystem and then use them until the expiration date comes, but that doesn't happen for some reason, hence this error.
I've also tried manually saving the credentials obtained from the metadata service in `/nonexistent/.aws/` (the clickhouse user's home directory), and these messages were gone until CH made another request to the metadata service, which made the credentials saved in the file obsolete.
**Does it reproduce on recent release?**
Yes, tested with 21.10.2.15
**How to reproduce**
* Which ClickHouse server version to use
* 21.10.2.15
* Which interface to use, if matters
* Doesn't matter
* Non-default settings, if any
```
<use_environment_credentials>true</use_environment_credentials>
```
* `CREATE TABLE` statements for all tables involved
```
CREATE TABLE s3_table (
json String
)
ENGINE = S3(
'https://s3.amazonaws.com/bucket/prefix/*.gz',
'JSONAsString'
);
```
* Queries to run that lead to unexpected result
```
SELECT * FROM s3_table LIMIT 1000000;
```
**Expected behavior**
Not to have lots of error messages in logs
**Error message and/or stacktrace**
```
<Error> AWSClient: ProcessCredentialsProvider: Failed to find credential process's profile: default
```
| https://github.com/ClickHouse/ClickHouse/issues/30559 | https://github.com/ClickHouse/ClickHouse/pull/35434 | c244ee7cbb61fea384679e18a577ff579060288b | 8aab8b491fa155a91ecff5f659bf9dac964bc684 | "2021-10-22T11:00:19Z" | c++ | "2022-03-20T14:13:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,546 | ["src/Columns/ColumnMap.cpp", "tests/queries/0_stateless/02124_buffer_with_type_map_long.reference", "tests/queries/0_stateless/02124_buffer_with_type_map_long.sh"] | Server crash when selecting from system.query_log if it is using Buffer table as a backend | **Describe what's wrong**
Server crashes when selecting from system.query_log.
**Does it reproduce on recent release?**
Yes
**How to reproduce**
* Which ClickHouse server version to use
I used the latest 21.10 version (518c2c1bfed1660ca610719e57dca5f49fba9011); it probably reproduces on master too.
* Non-default settings, if any
I used the following config for query_log:
```xml
<query_log>
<database>system</database>
<table>query_log</table>
<engine>ENGINE = Buffer('', '', 1, 1, 1, 1000000000000, 1000000000000, 1000000000000, 1000000000000)</engine>
<flush_interval_milliseconds>100</flush_interval_milliseconds>
</query_log>
```
* Queries to run that lead to unexpected result
```bash
#! /bin/bash
# Drop system.query_log to make sure that it is up to date with config.
curl 'localhost:8123' -d 'drop table system.query_log'
# It usually takes ~3000 iterations before crashing on my server.
for i in {1..10000}
do
curl 'localhost:8123' -d 'select * from system.query_log' 2> /dev/null > /dev/null
status=$?
echo "Iteration: $i; Status: $status"
if [[ "$status" != "0" ]]
then
break
fi
done
```
**Error message and/or stacktrace**
```
2021.10.22 10:04:09.907246 [ 3794 ] {66884da9-bca0-4448-b326-2f7edec9a2ba} <Debug> executeQuery: (from [::1]:41068) select * from system.query_log
2021.10.22 10:04:09.908571 [ 3794 ] {66884da9-bca0-4448-b326-2f7edec9a2ba} <Trace> ContextAccess (default): Access granted: SELECT(type, event_date, event_time, event_time_microseconds, query_start_time, query_start_time_microseconds, query_duration_ms, read_rows, read_bytes, written_rows, written_bytes, result_rows, result_bytes, memory_usage, current_database, query, formatted_query, normalized_query_hash, query_kind, databases, tables, columns, projections, views, exception_code, exception, stack_trace, is_initial_query, user, query_id, address, port, initial_user, initial_query_id, initial_address, initial_port, initial_query_start_time, initial_query_start_time_microseconds, interface, os_user, client_hostname, client_name, client_revision, client_version_major, client_version_minor, client_version_patch, http_method, http_user_agent, http_referer, forwarded_for, quota_key, revision, log_comment, thread_ids, ProfileEvents, Settings, used_aggregate_functions, used_aggregate_function_combinators, used_database_engines, used_data_type_families, used_dictionaries, used_formats, used_functions, used_storages, used_table_functions) ON system.query_log
2021.10.22 10:04:09.908971 [ 3794 ] {66884da9-bca0-4448-b326-2f7edec9a2ba} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2021.10.22 10:04:09.910518 [ 3794 ] {66884da9-bca0-4448-b326-2f7edec9a2ba} <Trace> ParallelFormattingOutputFormat: Parallel formatting is being used
2021.10.22 10:04:09.911209 [ 3955 ] {} <Trace> SystemLog (system.query_log): Flushing system log, 9 entries to flush up to offset 5403
2021.10.22 10:04:09.913899 [ 3791 ] {} <Trace> BaseDaemon: Received signal 11
2021.10.22 10:04:09.913985 [ 3955 ] {} <Trace> SystemLog (system.query_log): Flushed system log up to offset 5403
2021.10.22 10:04:09.914224 [ 9608 ] {} <Fatal> BaseDaemon: ########################################
2021.10.22 10:04:09.914303 [ 9608 ] {} <Fatal> BaseDaemon: (version 21.10.3.1, build id: 645A68B9CBFECFFC) (from thread 3963) (query_id: 66884da9-bca0-4448-b326-2f7edec9a2ba) Received signal Segmentation fault (11)
2021.10.22 10:04:09.914344 [ 9608 ] {} <Fatal> BaseDaemon: Address: 0x10 Access: read. Address not mapped to object.
2021.10.22 10:04:09.914369 [ 9608 ] {} <Fatal> BaseDaemon: Stack trace: 0xfb6d7d9 0x10af3a71 0x10af34c2 0x10b317a7 0x93c186d 0x93c2ce8 0x93bfdf0 0x93c20b3 0x7f6d887db6db 0x7f6d880f871f
2021.10.22 10:04:09.923138 [ 9608 ] {} <Fatal> BaseDaemon: 3.1. inlined from /home/dakovalkov/ClickHouse/build/../contrib/libcxx/include/vector:1559: std::__1::vector<COW<DB::IColumn>::chameleon_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::chameleon_ptr<DB::IColumn> > >::operator[](unsigned long) const
2021.10.22 10:04:09.923180 [ 9608 ] {} <Fatal> BaseDaemon: 3.2. inlined from ../src/Columns/ColumnTuple.h:99: DB::ColumnTuple::getColumn(unsigned long) const
2021.10.22 10:04:09.923237 [ 9608 ] {} <Fatal> BaseDaemon: 3.3. inlined from ../src/DataTypes/Serializations/SerializationMap.cpp:106: void DB::SerializationMap::serializeTextImpl<DB::SerializationMap::serializeText(DB::IColumn const&, unsigned long, DB::WriteBuffer&, DB::FormatSettings const&) const::$_0&, DB::SerializationMap::serializeText(DB::IColumn const&, unsigned long, DB::WriteBuffer&, DB::FormatSettings const&) const::$_0&>(DB::IColumn const&, unsigned long, DB::WriteBuffer&, DB::SerializationMap::serializeText(DB::IColumn const&, unsigned long, DB::WriteBuffer&, DB::FormatSettings const&) const::$_0&, DB::SerializationMap::serializeText(DB::IColumn const&, unsigned long, DB::WriteBuffer&, DB::FormatSettings const&) const::$_0&) const
2021.10.22 10:04:09.923272 [ 9608 ] {} <Fatal> BaseDaemon: 3. ../src/DataTypes/Serializations/SerializationMap.cpp:175: DB::SerializationMap::serializeText(DB::IColumn const&, unsigned long, DB::WriteBuffer&, DB::FormatSettings const&) const @ 0xfb6d7d9 in /home/dakovalkov/ClickHouse/build/programs/clickhouse
2021.10.22 10:04:09.933703 [ 9608 ] {} <Fatal> BaseDaemon: 4. /home/dakovalkov/ClickHouse/build/../src/Processors/Formats/IRowOutputFormat.cpp:90: DB::IRowOutputFormat::write(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > const&, unsigned long) @ 0x10af3a71 in /home/dakovalkov/ClickHouse/build/programs/clickhouse
2021.10.22 10:04:09.942135 [ 9608 ] {} <Fatal> BaseDaemon: 5.1. inlined from /home/dakovalkov/ClickHouse/build/../contrib/libcxx/include/functional:2236: std::__1::__function::__policy_func<void (std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > const&, unsigned long)>::operator bool() const
2021.10.22 10:04:09.942203 [ 9608 ] {} <Fatal> BaseDaemon: 5.2. inlined from ../contrib/libcxx/include/functional:2412: std::__1::function<void (std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > const&, unsigned long)>::operator bool() const
2021.10.22 10:04:09.942219 [ 9608 ] {} <Fatal> BaseDaemon: 5. ../src/Processors/Formats/IRowOutputFormat.cpp:37: DB::IRowOutputFormat::consume(DB::Chunk) @ 0x10af34c2 in /home/dakovalkov/ClickHouse/build/programs/clickhouse
2021.10.22 10:04:09.953967 [ 9608 ] {} <Fatal> BaseDaemon: 6.1. inlined from /home/dakovalkov/ClickHouse/build/../contrib/libcxx/include/memory:3211: ~shared_ptr
2021.10.22 10:04:09.954002 [ 9608 ] {} <Fatal> BaseDaemon: 6.2. inlined from ../src/Processors/Chunk.h:32: ~Chunk
2021.10.22 10:04:09.954032 [ 9608 ] {} <Fatal> BaseDaemon: 6. ../src/Processors/Formats/Impl/ParallelFormattingOutputFormat.cpp:179: DB::ParallelFormattingOutputFormat::formatterThreadFunction(unsigned long, std::__1::shared_ptr<DB::ThreadGroupStatus> const&) @ 0x10b317a7 in /home/dakovalkov/ClickHouse/build/programs/clickhouse
2021.10.22 10:04:09.963998 [ 9608 ] {} <Fatal> BaseDaemon: 7.1. inlined from /home/dakovalkov/ClickHouse/build/../contrib/libcxx/include/functional:2210: std::__1::__function::__policy_func<void ()>::operator=(std::nullptr_t)
2021.10.22 10:04:09.964048 [ 9608 ] {} <Fatal> BaseDaemon: 7.2. inlined from ../contrib/libcxx/include/functional:2533: std::__1::function<void ()>::operator=(std::nullptr_t)
2021.10.22 10:04:09.964080 [ 9608 ] {} <Fatal> BaseDaemon: 7. ../src/Common/ThreadPool.cpp:273: ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x93c186d in /home/dakovalkov/ClickHouse/build/programs/clickhouse
2021.10.22 10:04:09.974907 [ 9608 ] {} <Fatal> BaseDaemon: 8. /home/dakovalkov/ClickHouse/build/../src/Common/ThreadPool.cpp:0: ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0x93c2ce8 in /home/dakovalkov/ClickHouse/build/programs/clickhouse
2021.10.22 10:04:09.984050 [ 9608 ] {} <Fatal> BaseDaemon: 9.1. inlined from /home/dakovalkov/ClickHouse/build/../contrib/libcxx/include/functional:2210: std::__1::__function::__policy_func<void ()>::operator=(std::nullptr_t)
2021.10.22 10:04:09.984085 [ 9608 ] {} <Fatal> BaseDaemon: 9.2. inlined from ../contrib/libcxx/include/functional:2533: std::__1::function<void ()>::operator=(std::nullptr_t)
2021.10.22 10:04:09.984109 [ 9608 ] {} <Fatal> BaseDaemon: 9. ../src/Common/ThreadPool.cpp:273: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x93bfdf0 in /home/dakovalkov/ClickHouse/build/programs/clickhouse
2021.10.22 10:04:09.994285 [ 9608 ] {} <Fatal> BaseDaemon: 10.1. inlined from /home/dakovalkov/ClickHouse/build/../contrib/libcxx/include/memory:1655: std::__1::unique_ptr<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>, std::__1::default_delete<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> > >::reset(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>*)
2021.10.22 10:04:09.994323 [ 9608 ] {} <Fatal> BaseDaemon: 10.2. inlined from ../contrib/libcxx/include/memory:1612: ~unique_ptr
2021.10.22 10:04:09.994344 [ 9608 ] {} <Fatal> BaseDaemon: 10. ../contrib/libcxx/include/thread:293: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x93c20b3 in /home/dakovalkov/ClickHouse/build/programs/clickhouse
2021.10.22 10:04:09.994396 [ 9608 ] {} <Fatal> BaseDaemon: 11. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
2021.10.22 10:04:09.994501 [ 9608 ] {} <Fatal> BaseDaemon: 12. /build/glibc-S9d2JN/glibc-2.27/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:97: clone @ 0x12171f in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.27.so
```
| https://github.com/ClickHouse/ClickHouse/issues/30546 | https://github.com/ClickHouse/ClickHouse/pull/31742 | b3ac7d04f28f4b880e7726162c3607ba6eced00b | 423d497b27ca5a2389d097b3b18dae7acd9d3349 | "2021-10-22T07:09:00Z" | c++ | "2021-11-25T01:56:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,545 | ["src/Functions/now.cpp", "src/Functions/now64.cpp", "tests/queries/0_stateless/02100_now64_types_bug.reference", "tests/queries/0_stateless/02100_now64_types_bug.sql"] | Logical error found by fuzzer: cannot capture columns in now64 | ```
SELECT x
FROM
(
SELECT if((number % NULL) = -2147483648, NULL, if(toInt64(toInt64(now64(if((number % NULL) = -2147483648, NULL, if(toInt64(now64(toInt64(9223372036854775807, now64(h3kRing(NULL, NULL))), h3kRing(NULL, NULL))) = (number % NULL), nan, toFloat64(number))), toInt64(9223372036854775807, toInt64(9223372036854775807, now64(h3kRing(NULL, NULL))), now64(h3kRing(NULL, NULL))), h3kRing(NULL, NULL))), now64(toInt64(9223372036854775807, toInt64(0, now64(h3kRing(NULL, NULL))), now64(h3kRing(NULL, NULL))), h3kRing(NULL, NULL))) = (number % NULL), nan, toFloat64(number))) AS x
FROM system.numbers
LIMIT 3
)
ORDER BY x DESC NULLS LAST
Query id: d38af640-03bb-47ec-9ad0-144373bf6ac4
0 rows in set. Elapsed: 0.011 sec.
Received exception from server (version 21.11.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Cannot capture 3 columns because function now64 has 0 arguments.: while executing 'FUNCTION now64(if(equals(modulo(number, NULL), -2147483648), NULL, if(equals(toInt64(now64(toInt64(9223372036854775807, now64(h3kRing(NULL, NULL))), h3kRing(NULL, NULL))), modulo(number, NULL)), nan, toFloat64(number))) :: 2, toInt64(9223372036854775807, toInt64(9223372036854775807, now64(h3kRing(NULL, NULL))), now64(h3kRing(NULL, NULL))) :: 6, h3kRing(NULL, NULL) :: 3) -> now64(if(equals(modulo(number, NULL), -2147483648), NULL, if(equals(toInt64(now64(toInt64(9223372036854775807, now64(h3kRing(NULL, NULL))), h3kRing(NULL, NULL))), modulo(number, NULL)), nan, toFloat64(number))), toInt64(9223372036854775807, toInt64(9223372036854775807, now64(h3kRing(NULL, NULL))), now64(h3kRing(NULL, NULL))), h3kRing(NULL, NULL)) Nullable(Nothing) : 4'. (LOGICAL_ERROR)
``` | https://github.com/ClickHouse/ClickHouse/issues/30545 | https://github.com/ClickHouse/ClickHouse/pull/30639 | 329437abcab7dc66ec6ef54ad451d39ffebef33d | 36c3b1d5b1cb20f9c9401f12b902a818674a1f3c | "2021-10-22T04:17:13Z" | c++ | "2021-10-26T13:29:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,544 | ["src/Interpreters/UserDefinedSQLFunctionVisitor.cpp", "src/Parsers/ASTFunction.cpp", "tests/queries/0_stateless/02103_sql_user_defined_functions_composition.reference", "tests/queries/0_stateless/02103_sql_user_defined_functions_composition.sql"] | Server does not start after creating unary UDF | ```
2021.10.22 07:14:33.963099 [ 3451059 ] {} <Error> Application: DB::Exception: Syntax error (in file ./user_defined/function_a_function.sql): failed at position 39 ('->'): -> ch(2)
$ cat ./user_defined/function_a_function.sql
CREATE FUNCTION a_function AS tuple() -> ch(2)
``` | https://github.com/ClickHouse/ClickHouse/issues/30544 | https://github.com/ClickHouse/ClickHouse/pull/30483 | 1467a59f91eb435c2d12c98f94252a49c978fa4e | 6561f9bed21c6c68743eb6dc0cf95dccddff8d9b | "2021-10-22T04:15:53Z" | c++ | "2021-10-22T07:16:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,461 | ["src/Interpreters/MutationsInterpreter.cpp", "src/Interpreters/TreeRewriter.cpp", "src/Interpreters/TreeRewriter.h", "tests/queries/0_stateless/02100_alter_scalar_circular_deadlock.reference", "tests/queries/0_stateless/02100_alter_scalar_circular_deadlock.sql"] | Deadlock on ALTER with scalar subquery to the same table | ```
DROP TABLE IF EXISTS foo;
CREATE TABLE foo (timestamp DateTime, x UInt64)
ENGINE = MergeTree PARTITION BY toYYYYMMDD(timestamp)
ORDER BY (timestamp);
INSERT INTO foo (timestamp, x) SELECT toDateTime('2020-01-01 00:05:00'), number from system.numbers_mt LIMIT 100;
SELECT count() FROM system.mutations; -- ok
ALTER TABLE foo UPDATE x = 1 WHERE x = (SELECT x from foo WHERE x = 1);
SELECT count() FROM system.mutations; -- stuck
``` | https://github.com/ClickHouse/ClickHouse/issues/30461 | https://github.com/ClickHouse/ClickHouse/pull/30492 | 84a29cfe9d7c25c53789a9304665b03df6a04678 | 410624749e4c85157a74162d79f583720e227bb4 | "2021-10-20T15:13:23Z" | c++ | "2021-10-23T09:01:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,438 | ["src/Processors/QueryPlan/LimitStep.cpp", "tests/queries/0_stateless/02100_limit_push_down_bug.reference", "tests/queries/0_stateless/02100_limit_push_down_bug.sql"] | Cannot find column `length(x)` in source stream | **Describe what's wrong**
A correct query doesn't work.
**Does it reproduce on recent release?**
Yes,
ClickHouse version 21.11
```
CREATE TABLE tbl_repr(
ts DateTime,
x String)
ENGINE=MergeTree ORDER BY ts;
SELECT *
FROM
(
SELECT
x,
length(x)
FROM tbl_repr
WHERE ts > now()
LIMIT 1
)
WHERE x != ''
Query id: 083cdb34-500b-4b43-b096-9c8140165a65
0 rows in set. Elapsed: 0.004 sec.
Received exception from server (version 21.11.1):
Code: 8. DB::Exception: Received from localhost:9000. DB::Exception: Cannot find column `length(x)` in source stream, there are only columns: [x]. (THERE_IS_NO_COLUMN)
```
**Expected behavior**
Query works
**Error message and/or stacktrace**
Code: 8. DB::Exception: Received from localhost:9000. DB::Exception: Cannot find column `length(x)` in source stream, there are only columns: [x]. (THERE_IS_NO_COLUMN)
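A variant worth trying as a workaround (untested; it may or may not sidestep the problem) is to give the expression an explicit alias inside the subquery:

```sql
SELECT *
FROM
(
    SELECT
        x,
        length(x) AS x_len
    FROM tbl_repr
    WHERE ts > now()
    LIMIT 1
)
WHERE x != ''
```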
| https://github.com/ClickHouse/ClickHouse/issues/30438 | https://github.com/ClickHouse/ClickHouse/pull/30562 | 2ecfe7068a7d9192db7dd30209e67f120a727204 | 8c2413f6fe087b51c178888a5648d900105f24a3 | "2021-10-20T10:14:18Z" | c++ | "2021-10-23T21:19:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,400 | ["src/Storages/MergeTree/ReplicatedMergeTreeQueue.cpp", "tests/queries/0_stateless/00652_replicated_mutations_zookeeper.sh"] | DB::Exception: Part 4_1_22_1_2 intersects next part 4_5_10_1. It is a bug or a result of manual intervention in the ZooKeeper data. (LOGICAL_ERROR) | https://clickhouse-test-reports.s3.yandex.net/0/0b3926950d235e6a4198fc2c98c0fe63fd42652e/functional_stateless_tests_(release,_databaseordinary).html
```
2021.10.19 20:35:30.910387 [ 380 ] {} <Fatal> BaseDaemon: (version 21.11.1.8485 (official build), build id: CE4E236BB9251889E0F112381AFB7DD37D9C55DF) (from thread 556) Terminate called for uncaught exception:
2021.10.19 20:35:30.910556 [ 380 ] {} <Fatal> BaseDaemon: Code: 49. DB::Exception: Part 4_1_22_1_2 intersects next part 4_5_10_1. It is a bug or a result of manual intervention in the ZooKeeper data. (LOGICAL_ERROR), Stack trace (when copying this message, always include the lines below):
2021.10.19 20:35:30.910916 [ 380 ] {} <Fatal> BaseDaemon:
2021.10.19 20:35:30.911000 [ 380 ] {} <Fatal> BaseDaemon: 0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x9b34d94 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.911022 [ 380 ] {} <Fatal> BaseDaemon: 1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) @ 0xbce8a3e in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.911154 [ 380 ] {} <Fatal> BaseDaemon: 2. DB::ActiveDataPartSet::add(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >*) @ 0x12bafbda in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.911243 [ 380 ] {} <Fatal> BaseDaemon: 3. DB::ReplicatedMergeTreeQueue::addPartToMutations(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x12e3508e in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.911257 [ 380 ] {} <Fatal> BaseDaemon: 4. DB::ReplicatedMergeTreeQueue::insertUnlocked(std::__1::shared_ptr<DB::ReplicatedMergeTreeLogEntry> const&, std::__1::optional<long>&, std::__1::lock_guard<std::__1::mutex>&) @ 0x12e3395a in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.911364 [ 380 ] {} <Fatal> BaseDaemon: 5. DB::ReplicatedMergeTreeQueue::pullLogsToQueue(std::__1::shared_ptr<zkutil::ZooKeeper>, std::__1::function<void (Coordination::WatchResponse const&)>, DB::ReplicatedMergeTreeQueue::PullLogsReason) @ 0x12e3d49c in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.911536 [ 380 ] {} <Fatal> BaseDaemon: 6. DB::StorageReplicatedMergeTree::queueUpdatingTask() @ 0x129e212d in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.911735 [ 380 ] {} <Fatal> BaseDaemon: 7. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x11e134ae in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.911820 [ 380 ] {} <Fatal> BaseDaemon: 8. DB::BackgroundSchedulePool::threadFunction() @ 0x11e15e87 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.911888 [ 380 ] {} <Fatal> BaseDaemon: 9. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x11e16df3 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.911905 [ 380 ] {} <Fatal> BaseDaemon: 10. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x9b76ef7 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.912044 [ 380 ] {} <Fatal> BaseDaemon: 11. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x9b7a8fd in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.912088 [ 380 ] {} <Fatal> BaseDaemon: 12. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021.10.19 20:35:30.912198 [ 380 ] {} <Fatal> BaseDaemon: 13. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.10.19 20:35:30.912297 [ 380 ] {} <Fatal> BaseDaemon: (version 21.11.1.8485 (official build))
2021.10.19 20:35:30.912561 [ 15687 ] {} <Fatal> BaseDaemon: ########################################
2021.10.19 20:35:30.912647 [ 15687 ] {} <Fatal> BaseDaemon: (version 21.11.1.8485 (official build), build id: CE4E236BB9251889E0F112381AFB7DD37D9C55DF) (from thread 556) (no query) Received signal Aborted (6)
2021.10.19 20:35:30.912744 [ 15687 ] {} <Fatal> BaseDaemon:
2021.10.19 20:35:30.912767 [ 15687 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f4ef453a18b 0x7f4ef4519859 0x11732a9c 0x17e4dd83 0x17e4dcec 0x12e3f777 0x129e212d 0x11e134ae 0x11e15e87 0x11e16df3 0x9b76ef7 0x9b7a8fd 0x7f4ef471c609 0x7f4ef4616293
2021.10.19 20:35:30.912891 [ 15687 ] {} <Fatal> BaseDaemon: 2. gsignal @ 0x4618b in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.10.19 20:35:30.912919 [ 15687 ] {} <Fatal> BaseDaemon: 3. abort @ 0x25859 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.10.19 20:35:30.912963 [ 15687 ] {} <Fatal> BaseDaemon: 4. terminate_handler() @ 0x11732a9c in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.913041 [ 15687 ] {} <Fatal> BaseDaemon: 5. std::__terminate(void (*)()) @ 0x17e4dd83 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.913064 [ 15687 ] {} <Fatal> BaseDaemon: 6. std::terminate() @ 0x17e4dcec in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.913381 [ 15687 ] {} <Fatal> BaseDaemon: 7. DB::ReplicatedMergeTreeQueue::pullLogsToQueue(std::__1::shared_ptr<zkutil::ZooKeeper>, std::__1::function<void (Coordination::WatchResponse const&)>, DB::ReplicatedMergeTreeQueue::PullLogsReason) @ 0x12e3f777 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.913556 [ 15687 ] {} <Fatal> BaseDaemon: 8. DB::StorageReplicatedMergeTree::queueUpdatingTask() @ 0x129e212d in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.913577 [ 15687 ] {} <Fatal> BaseDaemon: 9. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x11e134ae in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.913597 [ 15687 ] {} <Fatal> BaseDaemon: 10. DB::BackgroundSchedulePool::threadFunction() @ 0x11e15e87 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.913739 [ 15687 ] {} <Fatal> BaseDaemon: 11. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x11e16df3 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.913831 [ 15687 ] {} <Fatal> BaseDaemon: 12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x9b76ef7 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.913863 [ 15687 ] {} <Fatal> BaseDaemon: 13. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x9b7a8fd in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.913963 [ 15687 ] {} <Fatal> BaseDaemon: 14. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021.10.19 20:35:30.914034 [ 15687 ] {} <Fatal> BaseDaemon: 15. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.10.19 20:35:30.950862 [ 380 ] {} <Fatal> BaseDaemon: (version 21.11.1.8485 (official build), build id: CE4E236BB9251889E0F112381AFB7DD37D9C55DF) (from thread 587) Terminate called for uncaught exception:
2021.10.19 20:35:30.951017 [ 380 ] {} <Fatal> BaseDaemon: Code: 49. DB::Exception: Part 4_1_22_1_2 intersects next part 4_5_10_1. It is a bug or a result of manual intervention in the ZooKeeper data. (LOGICAL_ERROR), Stack trace (when copying this message, always include the lines below):
2021.10.19 20:35:30.951108 [ 380 ] {} <Fatal> BaseDaemon:
2021.10.19 20:35:30.951131 [ 380 ] {} <Fatal> BaseDaemon: 0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x9b34d94 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.951209 [ 380 ] {} <Fatal> BaseDaemon: 1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) @ 0xbce8a3e in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.951240 [ 380 ] {} <Fatal> BaseDaemon: 2. DB::ActiveDataPartSet::add(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >*) @ 0x12bafbda in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.951259 [ 380 ] {} <Fatal> BaseDaemon: 3. DB::ReplicatedMergeTreeQueue::addPartToMutations(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x12e3508e in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.951338 [ 380 ] {} <Fatal> BaseDaemon: 4. DB::ReplicatedMergeTreeQueue::insertUnlocked(std::__1::shared_ptr<DB::ReplicatedMergeTreeLogEntry> const&, std::__1::optional<long>&, std::__1::lock_guard<std::__1::mutex>&) @ 0x12e3395a in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.951432 [ 380 ] {} <Fatal> BaseDaemon: 5. DB::ReplicatedMergeTreeQueue::pullLogsToQueue(std::__1::shared_ptr<zkutil::ZooKeeper>, std::__1::function<void (Coordination::WatchResponse const&)>, DB::ReplicatedMergeTreeQueue::PullLogsReason) @ 0x12e3d49c in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.951454 [ 380 ] {} <Fatal> BaseDaemon: 6. DB::StorageReplicatedMergeTree::queueUpdatingTask() @ 0x129e212d in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.951595 [ 380 ] {} <Fatal> BaseDaemon: 7. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x11e134ae in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.951617 [ 380 ] {} <Fatal> BaseDaemon: 8. DB::BackgroundSchedulePool::threadFunction() @ 0x11e15e87 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.951642 [ 380 ] {} <Fatal> BaseDaemon: 9. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x11e16df3 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.951718 [ 380 ] {} <Fatal> BaseDaemon: 10. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x9b76ef7 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.951804 [ 380 ] {} <Fatal> BaseDaemon: 11. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x9b7a8fd in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.951864 [ 380 ] {} <Fatal> BaseDaemon: 12. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021.10.19 20:35:30.951954 [ 380 ] {} <Fatal> BaseDaemon: 13. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.10.19 20:35:30.952080 [ 380 ] {} <Fatal> BaseDaemon: (version 21.11.1.8485 (official build))
2021.10.19 20:35:30.953165 [ 15688 ] {} <Fatal> BaseDaemon: ########################################
2021.10.19 20:35:30.953235 [ 15688 ] {} <Fatal> BaseDaemon: (version 21.11.1.8485 (official build), build id: CE4E236BB9251889E0F112381AFB7DD37D9C55DF) (from thread 587) (no query) Received signal Aborted (6)
2021.10.19 20:35:30.953307 [ 15688 ] {} <Fatal> BaseDaemon:
2021.10.19 20:35:30.953416 [ 15688 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f4ef453a18b 0x7f4ef4519859 0x11732a9c 0x17e4dd83 0x17e4dcec 0x12e3f777 0x129e212d 0x11e134ae 0x11e15e87 0x11e16df3 0x9b76ef7 0x9b7a8fd 0x7f4ef471c609 0x7f4ef4616293
2021.10.19 20:35:30.953576 [ 15688 ] {} <Fatal> BaseDaemon: 2. gsignal @ 0x4618b in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.10.19 20:35:30.953791 [ 15688 ] {} <Fatal> BaseDaemon: 3. abort @ 0x25859 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.10.19 20:35:30.953859 [ 15688 ] {} <Fatal> BaseDaemon: 4. terminate_handler() @ 0x11732a9c in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.953948 [ 15688 ] {} <Fatal> BaseDaemon: 5. std::__terminate(void (*)()) @ 0x17e4dd83 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.953970 [ 15688 ] {} <Fatal> BaseDaemon: 6. std::terminate() @ 0x17e4dcec in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.954415 [ 15688 ] {} <Fatal> BaseDaemon: 7. DB::ReplicatedMergeTreeQueue::pullLogsToQueue(std::__1::shared_ptr<zkutil::ZooKeeper>, std::__1::function<void (Coordination::WatchResponse const&)>, DB::ReplicatedMergeTreeQueue::PullLogsReason) @ 0x12e3f777 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.954503 [ 15688 ] {} <Fatal> BaseDaemon: 8. DB::StorageReplicatedMergeTree::queueUpdatingTask() @ 0x129e212d in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.954639 [ 15688 ] {} <Fatal> BaseDaemon: 9. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x11e134ae in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.954665 [ 15688 ] {} <Fatal> BaseDaemon: 10. DB::BackgroundSchedulePool::threadFunction() @ 0x11e15e87 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.954903 [ 15688 ] {} <Fatal> BaseDaemon: 11. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x11e16df3 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.955105 [ 15688 ] {} <Fatal> BaseDaemon: 12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x9b76ef7 in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.955321 [ 15688 ] {} <Fatal> BaseDaemon: 13. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x9b7a8fd in /usr/lib/debug/.build-id/ce/4e236bb9251889e0f112381afb7dd37d9c55df.debug
2021.10.19 20:35:30.955462 [ 15688 ] {} <Fatal> BaseDaemon: 14. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021.10.19 20:35:30.955496 [ 15688 ] {} <Fatal> BaseDaemon: 15. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.10.19 20:35:31.059059 [ 15687 ] {} <Fatal> BaseDaemon: Checksum of the binary: BC42F91F74264701AD7AD8F570AF2CD8, integrity check passed.
2021.10.19 20:35:31.103664 [ 15688 ] {} <Fatal> BaseDaemon: Checksum of the binary: BC42F91F74264701AD7AD8F570AF2CD8, integrity check passed.
2021.10.19 20:35:51.441042 [ 378 ] {} <Fatal> Application: Child process was terminated by signal 6.
```
| https://github.com/ClickHouse/ClickHouse/issues/30400 | https://github.com/ClickHouse/ClickHouse/pull/30651 | 8749f4a31a5a3ad6322736ea2a0d96f0007017ea | a29711f1d075f389bbb0306742cd8ffcc5a4f1ba | "2021-10-19T17:19:10Z" | c++ | "2021-10-27T07:52:21Z" |