status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,794 | ["docs/en/getting-started/example-datasets/index.md", "docs/en/getting-started/example-datasets/opensky.md"] | Example dataset: air traffic data from OpenSky network. | https://zenodo.org/record/5092942#.YP5WAzpRXYc | https://github.com/ClickHouse/ClickHouse/issues/26794 | https://github.com/ClickHouse/ClickHouse/pull/27437 | a090bfc494d98bae4d60f27c00aa24f48486bd99 | 67d2a35d4ec885f36c24c683bb4f50616be300b9 | "2021-07-26T06:31:31Z" | c++ | "2021-08-08T21:55:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,672 | ["src/DataTypes/DataTypeEnum.h", "src/Storages/StorageInMemoryMetadata.cpp", "tests/queries/0_stateless/02012_changed_enum_type_non_replicated.reference", "tests/queries/0_stateless/02012_changed_enum_type_non_replicated.sql", "tests/queries/0_stateless/02012_zookeeper_changed_enum_type.reference", "tests/queries/0_stateless/02012_zookeeper_changed_enum_type.sql", "tests/queries/0_stateless/02012_zookeeper_changed_enum_type_incompatible.reference", "tests/queries/0_stateless/02012_zookeeper_changed_enum_type_incompatible.sql", "tests/queries/skip_list.json"] | attaching parts with 'compatible' enum types | Got something like that in practice with backup recovery - IRL most probably the sequence of events leading to that discrepancy was different (not clear yet).
```sql
drop table enum_alter_issue;
create table enum_alter_issue (a Enum8('one' = 1, 'two' = 2)) engine = ReplicatedMergeTree('/clickhouse/tables/enum_alter_issue', 'test') ORDER BY a;
-- backup / move part aside or similar operation
insert into enum_alter_issue values ('one'), ('two');
alter table enum_alter_issue detach partition id 'all';
-- schema evolution (enum get extended)
alter table enum_alter_issue modify column a Enum8('one' = 1, 'two' = 2, 'three' = 3);
insert into enum_alter_issue values ('one'), ('two');
alter table enum_alter_issue detach partition id 'all';
-- attempt to attach back all the parts (new generation and old generation)
alter table enum_alter_issue attach partition id 'all';
Received exception from server (version 21.9.1):
Code: 53. DB::Exception: Received from localhost:9000. DB::Exception: Type mismatch for column a. Column has type Enum8('one' = 1, 'two' = 2, 'three' = 3), got type Enum8('one' = 1, 'two' = 2).
``` | https://github.com/ClickHouse/ClickHouse/issues/26672 | https://github.com/ClickHouse/ClickHouse/pull/28028 | b8296416f7f918ca67f526d306a0e3e7eebdbdce | bcaff654577b6c8248a44ed1731fefaa63ee1566 | "2021-07-21T13:48:17Z" | c++ | "2021-08-26T10:50:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,640 | ["src/Storages/Kafka/StorageKafka.cpp"] | Change the kafka consumers max value from 16 to physical cpu cores |
**Use case**
Our machine has 96 cores, 256 GB RAM, and a 10 TB SSD.
We want to use the Kafka engine table and a Distributed table only for writing data to the other worker nodes.
**Describe the solution you'd like**
One Kafka engine node could then consume more data and reduce the workload of the other ClickHouse nodes.
The other worker nodes can get more performance.
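For context, a minimal sketch of the kind of ingestion table this affects; the broker, topic, and schema below are placeholders, and `kafka_num_consumers` is the per-table setting currently capped at 16:
```sql
-- Hypothetical ingestion table; names and broker settings are assumptions.
CREATE TABLE ingest.kafka_events
(
    ts DateTime,
    payload String
)
ENGINE = Kafka
SETTINGS
    kafka_broker_list = 'kafka:9092',
    kafka_topic_list = 'events',
    kafka_group_name = 'ch_ingest',
    kafka_format = 'JSONEachRow',
    kafka_num_consumers = 16; -- capped at 16 today; the request is to allow up to the physical core count
```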

| https://github.com/ClickHouse/ClickHouse/issues/26640 | https://github.com/ClickHouse/ClickHouse/pull/26642 | 5ffd99dfd43472e77fe2738f18d5ba9f1933db73 | 6230ad016009da740ea29a046d795b37497fcd96 | "2021-07-21T02:37:26Z" | c++ | "2021-07-22T08:54:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,591 | ["src/Interpreters/inplaceBlockConversions.cpp", "tests/queries/0_stateless/02000_default_from_default_empty_column.reference", "tests/queries/0_stateless/02000_default_from_default_empty_column.sql"] | A non-materialized column cannot be retrieved if its default expression depends on another non-materialized column | I get an exception when I query a column whose default is calculated from another column that does not physically exist in the data part either.
Here is a reproducible example:
```
create table test (col Int8) engine=MergeTree order by tuple();
insert into test values (1);
alter table test add column s1 String;
alter table test add column s2 String default s1;
-- requesting both columns
select * from test;
col | s1 | s2
----+----+---
1 | |
-- requesting only one column
select s2 from test;
DB::Exception: Missing columns: 's1' while processing query: 'CAST(s1, 'String') AS s2',
required columns: 's1' 's1': (while reading from part /var/lib/clickhouse/data/dw/test/all_1_1_0/):
While executing MergeTree (version 21.7.3.14 (official build))
```
The same thing happens with a merge.
I set up a 30-day TTL for two array columns, 'names' and 'values'.
MATERIALIZE TTL cleaned the data.
When a merge for an old partition (20210418) got scheduled, it could not proceed, because one of the recently added columns (`v3h_fixed`) had its default value pointing to the 'names' and 'values' columns: `CAST(values[indexOf(names, '3hFixed')], 'String')`
Here is the exception:
Code: 47, e.displayText() = DB::Exception: Missing columns: 'names' 'values' while processing query: 'CAST(values[indexOf(names, '3hFixed')], 'String') AS v3h_fixed', required columns: 'values' 'names' 'values' 'names': (while reading from part /var/lib/clickhouse/data/db/table/20210418_238_238_0_248/): While executing MergeTreeSequentialSource: Cannot fetch required block. Stream PipelineExecuting, part 2 (version 21.7.3.14 (official build))
The merge was able to proceed and completed successfully after I removed the default expression from the `v3h_fixed` column.
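For reference, a hedged sketch of that removal, using the `REMOVE DEFAULT` syntax available in recent versions and the column name from above:
```sql
-- Drops only the default expression, keeping the column and its data.
ALTER TABLE db.table MODIFY COLUMN v3h_fixed REMOVE DEFAULT;
```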
I could not reproduce this on a test table.
Weird.
```
[ 7095 ] {} <Error> db.table: auto DB::StorageReplicatedMergeTree::processQueueEntry(ReplicatedMergeTreeQueue::SelectedEntryPtr)::(anonymous class)::operator()(DB::StorageReplicatedMergeTree::LogEntryPtr &) const: Code: 47, e.displayText() = DB::Exception: Missing columns: 'names' 'values' while processing query: 'CAST(values[indexOf(names, '3hFixed')], 'String') AS v3h_fixed', required columns: 'values' 'names' 'values' 'names': (while reading from part /var/lib/clickhouse/data/db/table/20210418_238_238_0_248/): While executing MergeTreeSequentialSource: Cannot fetch required block. Stream PipelineExecuting, part 2,
Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8d31b5a in /usr/bin/clickhouse
1. DB::TreeRewriterResult::collectUsedColumns(std::__1::shared_ptr<DB::IAST> const&, bool) @ 0xfdb2470 in /usr/bin/clickhouse
2. DB::TreeRewriter::analyze(std::__1::shared_ptr<DB::IAST>&, DB::NamesAndTypesList const&, std::__1::shared_ptr<DB::IStorage const>, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool) const @ 0xfdbb40c in /usr/bin/clickhouse
3. ? @ 0xfe370e9 in /usr/bin/clickhouse
4. DB::evaluateMissingDefaults(DB::Block const&, DB::NamesAndTypesList const&, DB::ColumnsDescription const&, std::__1::shared_ptr<DB::Context const>, bool, bool) @ 0xfe379c9 in /usr/bin/clickhouse
5. DB::IMergeTreeReader::evaluateMissingDefaults(DB::Block, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x104bf3ed in /usr/bin/clickhouse
6. DB::MergeTreeSequentialSource::generate() @ 0x104d098b in /usr/bin/clickhouse
7. DB::ISource::tryGenerate() @ 0x106dfe35 in /usr/bin/clickhouse
8. DB::ISource::work() @ 0x106dfa1a in /usr/bin/clickhouse
9. DB::SourceWithProgress::work() @ 0x108b098a in /usr/bin/clickhouse
10. ? @ 0x1071a4dd in /usr/bin/clickhouse
11. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x10717071 in /usr/bin/clickhouse
12. DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) @ 0x10715a9c in /usr/bin/clickhouse
13. DB::PullingPipelineExecutor::pull(DB::Chunk&) @ 0x1072308a in /usr/bin/clickhouse
14. DB::PullingPipelineExecutor::pull(DB::Block&) @ 0x10723290 in /usr/bin/clickhouse
15. DB::PipelineExecutingBlockInputStream::readImpl() @ 0x10711f14 in /usr/bin/clickhouse
16. DB::IBlockInputStream::read() @ 0xf51c6a4 in /usr/bin/clickhouse
17. DB::ColumnGathererStream::fetchNewBlock(DB::ColumnGathererStream::Source&, unsigned long) @ 0xfe72412 in /usr/bin/clickhouse
18. void DB::ColumnGathererStream::gather<DB::ColumnString>(DB::ColumnString&) @ 0xff06ef1 in /usr/bin/clickhouse
19. DB::ColumnGathererStream::readImpl() @ 0xfe72244 in /usr/bin/clickhouse
20. DB::IBlockInputStream::read() @ 0xf51c6a4 in /usr/bin/clickhouse
21. DB::MergeTreeDataMergerMutator::mergePartsToTemporaryPart(DB::FutureMergedMutatedPart const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::BackgroundProcessListEntry<DB::MergeListElement, DB::MergeInfo>&, std::__1::shared_ptr<DB::RWLockImpl::LockHolderImpl>&, long, std::__1::shared_ptr<DB::Context const>, std::__1::unique_ptr<DB::IReservation, std::__1::default_delete<DB::IReservation> > const&, bool, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::MergeTreeData::MergingParams const&, DB::IMergeTreeDataPart const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x103eeb62 in /usr/bin/clickhouse
22. DB::StorageReplicatedMergeTree::tryExecuteMerge(DB::ReplicatedMergeTreeLogEntry const&) @ 0x10123346 in /usr/bin/clickhouse
23. DB::StorageReplicatedMergeTree::executeLogEntry(DB::ReplicatedMergeTreeLogEntry&) @ 0x10115054 in /usr/bin/clickhouse
24. ? @ 0x101c9c3f in /usr/bin/clickhouse
25. DB::ReplicatedMergeTreeQueue::processEntry(std::__1::function<std::__1::shared_ptr<zkutil::ZooKeeper> ()>, std::__1::shared_ptr<DB::ReplicatedMergeTreeLogEntry>&, std::__1::function<bool (std::__1::shared_ptr<DB::ReplicatedMergeTreeLogEntry>&)>) @ 0x1055e38c in /usr/bin/clickhouse
26. DB::StorageReplicatedMergeTree::processQueueEntry(std::__1::shared_ptr<DB::ReplicatedMergeTreeQueue::SelectedEntry>) @ 0x1015e49d in /usr/bin/clickhouse
27. ? @ 0x103130b7 in /usr/bin/clickhouse
28. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8d75738 in /usr/bin/clickhouse
29. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0x8d772df in /usr/bin/clickhouse
30. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8d72a1f in /usr/bin/clickhouse
31. ? @ 0x8d76303 in /usr/bin/clickhouse
(version 21.7.3.14 (official build))
``` | https://github.com/ClickHouse/ClickHouse/issues/26591 | https://github.com/ClickHouse/ClickHouse/pull/26900 | ea0514a95518a7646c5ccc1b6e1e075e5f66d6a7 | 26b129bef21f05cb9a425ae2802a1856bf040601 | "2021-07-20T18:27:19Z" | c++ | "2021-07-28T19:40:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,522 | ["src/Processors/QueryPlan/ReadFromMergeTree.cpp", "tests/queries/0_stateless/02002_sampling_and_unknown_column_bug.reference", "tests/queries/0_stateless/02002_sampling_and_unknown_column_bug.sql"] | 21.7+ Cannot find column in source stream in a query with sampling | 21.7.3 throws an exception if `_sample_factor` is used in a query:
```
CREATE TABLE sessions
(
`user_id` UInt64
)
ENGINE = MergeTree
ORDER BY user_id
SAMPLE BY user_id;
insert into sessions values(1);
SELECT
sum(user_id * _sample_factor)
FROM sessions
SAMPLE 10000000
DB::Exception: Cannot find column `user_id` in source stream (version 21.8.1.7409 (official build))
SELECT
sum(user_id)
FROM sessions
SAMPLE 10000000
sum(user_id)
------------
1
```
No exception on 21.6.5 | https://github.com/ClickHouse/ClickHouse/issues/26522 | https://github.com/ClickHouse/ClickHouse/pull/27301 | fad30f6949728ca128f13df3b8c643efff379e81 | fa5bcb2e3053cb4e6afb7c0eeb54e561cd4a97df | "2021-07-19T17:05:25Z" | c++ | "2021-08-07T15:39:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,511 | ["src/DataStreams/RemoteBlockInputStream.cpp", "src/DataStreams/RemoteBlockInputStream.h", "tests/queries/0_stateless/01956_skip_unavailable_shards_excessive_attempts.reference", "tests/queries/0_stateless/01956_skip_unavailable_shards_excessive_attempts.sh"] | skip_unavailable_shards=1 makes 6 tries if a shard is unavailable | ```
set skip_unavailable_shards=1;
select hostname(), * from cluster('{cluster}', system.one);
Connection failed at try №1,
Connection failed at try №2,
Connection failed at try №3,
Connection failed at try №1,
Connection failed at try №2,
Connection failed at try №3,
```
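A hedged way to check the retry budget that the two attempts should presumably share (governed, as I understand it, by this setting with a default of 3):
```sql
SELECT name, value
FROM system.settings
WHERE name = 'connections_with_failover_max_tries'; -- default is 3
```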
Expected: 3 tries. | https://github.com/ClickHouse/ClickHouse/issues/26511 | https://github.com/ClickHouse/ClickHouse/pull/26658 | b4c3796dc6f4ae88d98b56d504ee431d73415922 | f9581893d85202fe1fc8b408b08517d5b92af7f7 | "2021-07-19T15:20:07Z" | c++ | "2021-07-21T23:12:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,503 | ["src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeData.h", "src/Storages/StorageMergeTree.cpp", "src/Storages/StorageReplicatedMergeTree.cpp", "tests/integration/test_cleanup_after_start/__init__.py", "tests/integration/test_cleanup_after_start/test.py"] | Remove 'delete_tmp_' directories on server start-up | For some reason, when we delete a MergeTree part in an atomic way, something may go wrong and leave the `delete_tmp_*` folder (or part of it) on disk. This may lead to growing disk space consumption over time.
It's suggested to delete these folders at least at server start-up. The current solution with `clearOldTemporaryDirectories()` doesn't work because this method is called periodically from a thread without any locks or advisory on the folders, which may cause race conditions.
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,491 | ["src/Interpreters/ExternalDictionariesLoader.cpp", "src/Storages/StorageDictionary.cpp", "tests/queries/0_stateless/01948_dictionary_quoted_database_name.reference", "tests/queries/0_stateless/01948_dictionary_quoted_database_name.sql"] | Look like dictionary `SOURCE(CLICKHOUSE(...))` doesn't work after 21.6+ | **Does it reproduce on recent release?**
Yes, reproducible on 21.6 and 21.7 from Docker.
**Describe the bug**
```
CREATE DATABASE `_test.ДБ_atomic_` ENGINE=Atomic;
CREATE TABLE `_test.ДБ_atomic_`.table4
(
`id` UInt64,
`Col1` String,
`Col2` String,
`Col3` String,
`Col4` String,
`Col5` String
)
ENGINE = MergeTree
PARTITION BY id
ORDER BY (id, Col1, Col2, Col3, Col4, Col5)
SETTINGS index_granularity = 8192;
CREATE DICTIONARY `_test.ДБ_atomic_`.dict_example
(
`id` UInt64,
`Col1` String,
`Col2` String,
`Col3` String,
`Col4` String,
`Col5` String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 DATABASE '_test.ДБ_atomic_' TABLE 'table4' USER 'default' PASSWORD ''))
LIFETIME(MIN 0 MAX 60)
LAYOUT(HASHED());
INSERT INTO `_test.ДБ_atomic_`.table4 SELECT number, 'Col1','Col2','Col3','Col4','Col5' FROM numbers(100);
SELECT count() FROM `_test.ДБ_atomic_`.table4;
SYSTEM RELOAD DICTIONARIES;
SELECT * FROM system.dictionaries FORMAT Vertical;
```
# returns
```
Row 1:
──────
database: _test.ДБ_atomic_
name: dict_example
uuid: 5f7e0f50-345a-497a-9f7e-0f50345aa97a
status: NOT_LOADED
origin: 5f7e0f50-345a-497a-9f7e-0f50345aa97a
type:
key.names: ['id']
key.types: ['UInt64']
attribute.names: ['Col1','Col2','Col3','Col4','Col5']
attribute.types: ['String','String','String','String','String']
bytes_allocated: 0
query_count: 0
hit_rate: 0
found_rate: 0
element_count: 0
load_factor: 0
source:
lifetime_min: 0
lifetime_max: 0
loading_start_time: 1970-01-01 00:00:00
last_successful_update_time: 1970-01-01 00:00:00
loading_duration: 0
last_exception:
```
* Queries to run that lead to unexpected result
```sql
SELECT count() FROM `_test.ДБ_atomic_`.dict_example;
```
returns:
**Expected behavior**
100
**Error message and/or stacktrace**
```
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Dictionary (`_test.ДБ_atomic_.dict_example`) not found.
```
**Stacktrace**
```
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8d31b5a in /usr/bin/clickhouse
1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) @ 0x8e48943 in /usr/bin/clickhouse
2. DB::ExternalDictionariesLoader::resolveDictionaryName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf839c52 in /usr/bin/clickhouse
3. DB::ExternalDictionariesLoader::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context const>) const @ 0xf839937 in /usr/bin/clickhouse
4. DB::StorageDictionary::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0x100007dd in /usr/bin/clickhouse
5. DB::IStorage::read(DB::QueryPlan&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0xffe43f7 in /usr/bin/clickhouse
6. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&) @ 0xfaf63f5 in /usr/bin/clickhouse
7. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>) @ 0xfaedcbe in /usr/bin/clickhouse
8. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0xfaecd60 in /usr/bin/clickhouse
9. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0xfc6215e in /usr/bin/clickhouse
10. DB::InterpreterSelectWithUnionQuery::execute() @ 0xfc63231 in /usr/bin/clickhouse
11. ? @ 0xfe22253 in /usr/bin/clickhouse
12. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool) @ 0xfe208e3 in /usr/bin/clickhouse
13. DB::TCPHandler::runImpl() @ 0x1069f6c2 in /usr/bin/clickhouse
14. DB::TCPHandler::run() @ 0x106b25d9 in /usr/bin/clickhouse
15. Poco::Net::TCPServerConnection::start() @ 0x1338b30f in /usr/bin/clickhouse
16. Poco::Net::TCPServerDispatcher::run() @ 0x1338cd9a in /usr/bin/clickhouse
17. Poco::PooledThread::run() @ 0x134bfc19 in /usr/bin/clickhouse
18. Poco::ThreadImpl::runnableEntry(void*) @ 0x134bbeaa in /usr/bin/clickhouse
19. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
20. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
``` | https://github.com/ClickHouse/ClickHouse/issues/26491 | https://github.com/ClickHouse/ClickHouse/pull/26508 | d1eeb37cacc199fede617051152929f9580c87ea | 348a3abb0b14db8db105b0d551d0fc188e77ae6f | "2021-07-19T08:13:44Z" | c++ | "2021-07-20T16:35:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,467 | ["src/Core/Settings.h", "src/Core/SettingsChangesHistory.h", "src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeData.h", "src/Storages/MergeTree/MergeTreeSettings.h", "src/Storages/StorageMergeTree.cpp", "tests/queries/0_stateless/02710_allow_suspicious_indices.reference", "tests/queries/0_stateless/02710_allow_suspicious_indices.sql"] | Add a setting `allow_suspicious_indices`. | Not sure if this is unexpected behaviour or expected behaviour.
```sql
CREATE TABLE test
(
`a` Int64
)
ENGINE = MergeTree
PRIMARY KEY a
ORDER BY (a, a);
ALTER TABLE test
ADD COLUMN `b` Int64, MODIFY ORDER BY (a, a, b, b, b);
SHOW CREATE TABLE test;
CREATE TABLE test
(
`a` Int64,
`b` Int64
)
ENGINE = MergeTree
PRIMARY KEY a
ORDER BY (a, a, b, b, b)
SETTINGS index_granularity = 8192
```
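A hedged illustration of what such a sanity check would steer users toward, since the duplicated key columns add no selectivity:
```sql
-- Equivalent table with the key deduplicated by hand:
CREATE TABLE test_clean
(
    `a` Int64,
    `b` Int64
)
ENGINE = MergeTree
PRIMARY KEY a
ORDER BY (a, b);
```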
| https://github.com/ClickHouse/ClickHouse/issues/26467 | https://github.com/ClickHouse/ClickHouse/pull/48536 | 259d23303f63d17faa99723ddba9d9ba53ae5b9a | 6ba6e24a5a18d0bb521f3081b4f1b807506675b0 | "2021-07-17T14:27:05Z" | c++ | "2023-04-24T20:27:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,361 | ["docs/en/engines/database-engines/materialized-mysql.md", "src/Common/mysqlxx/mysqlxx/Types.h", "src/Core/MySQL/MySQLReplication.cpp", "src/DataTypes/DataTypeString.cpp", "src/DataTypes/DataTypesNumber.cpp", "src/Databases/MySQL/MaterializedMySQLSyncThread.cpp", "src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp", "src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp", "src/Processors/Sources/MySQLSource.cpp", "tests/integration/test_materialized_mysql_database/materialize_with_ddl.py"] | MaterializeMySQL engine mysql time type error |
When a MySQL column's type is changed to `time`, the MaterializeMySQL engine can no longer sync the entire database and reports an error:
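The triggering DDL on the MySQL side, reconstructed verbatim from the log below:
```sql
ALTER TABLE `midea_mss`.`sys_user` ADD COLUMN `uname` time NULL AFTER `username`;
```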
```
2021.07.15 17:32:08.888869 [ 17577 ] {} <Debug> MaterializeMySQLSyncThread: Skip MySQL event:
=== FormatDescriptionEvent ===
Timestamp: 1626324911
Event Type: FormatDescriptionEvent
Server ID: 683306
Event Size: 119
Log Pos: 123
Flags: 0
Binlog Version: 4
Server Version: 5.7.30-log
Create Timestamp: 0
Event Header Len: 19
2021.07.15 17:32:08.888887 [ 17577 ] {} <Debug> MaterializeMySQLSyncThread: Skip MySQL event:
=== PreviousGTIDsEvent ===
Timestamp: 1626324911
Event Type: PreviousGTIDsEvent
Server ID: 683306
Event Size: 71
Log Pos: 194
Flags: 128
[DryRun Event]
2021.07.15 17:32:08.888914 [ 17577 ] {} <Debug> MaterializeMySQLSyncThread: Skip MySQL event:
=== GTIDEvent ===
Timestamp: 1626333839
Event Type: GTIDEvent
Server ID: 683306
Event Size: 65
Log Pos: 2912
Flags: 0
GTID Next: d230d9a6-e3c1-11eb-b977-b409318409b6:51
2021.07.15 17:32:08.897374 [ 17577 ] {} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2021.07.15 17:32:08.897627 [ 17577 ] {} <Debug> executeQuery: (internal) /*Materialize MySQL step 2: execute MySQL DDL for sync data*/ EXTERNAL DDL FROM MySQL(ods_midea_mss_1, midea_mss) ALTER TABLE `midea_mss`.`sys_user` ADD COLUMN `uname` time NULL AFTER `username`
2021.07.15 17:32:08.898108 [ 17577 ] {} <Error> MaterializeMySQLSyncThread(ods_midea_mss_1): Query EXTERNAL DDL FROM MySQL(ods_midea_mss_1, midea_mss) ALTER TABLE `midea_mss`.`sys_user`
ADD COLUMN `uname` time NULL AFTER `username` wasn't finished successfully: Code: 50, e.displayText() = DB::Exception: Unknown data type family: time, Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8b770fa in /usr/bin/clickhouse
1. DB::DataTypeFactory::findCreatorByName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf37e274 in /usr/bin/clickhouse
2. DB::DataTypeFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::IAST> const&) const @ 0xf37d199 in /usr/bin/clickhouse
3. DB::DataTypeFactory::get(std::__1::shared_ptr<DB::IAST> const&) const @ 0xf37cf86 in /usr/bin/clickhouse
4. ? @ 0xf3e62c9 in /usr/bin/clickhouse
5. DB::DataTypeFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::IAST> const&) const @ 0xf37d1a2 in /usr/bin/clickhouse
6. DB::DataTypeFactory::get(std::__1::shared_ptr<DB::IAST> const&) const @ 0xf37cf86 in /usr/bin/clickhouse
7. ? @ 0xfc6fbfb in /usr/bin/clickhouse
8. DB::MySQLInterpreter::InterpreterAlterImpl::getRewrittenQueries(DB::MySQLParser::ASTAlterQuery const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xfc7272f in /usr/bin/clickhouse
9. DB::MySQLInterpreter::InterpreterMySQLDDLQuery<DB::MySQLInterpreter::InterpreterAlterImpl>::execute() @ 0xf8da31e in /usr/bin/clickhouse
10. DB::InterpreterExternalDDLQuery::execute() @ 0xf8d8e03 in /usr/bin/clickhouse
11. ? @ 0xfc44781 in /usr/bin/clickhouse
12. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool) @ 0xfc42e03 in /usr/bin/clickhouse
13. ? @ 0xf5de654 in /usr/bin/clickhouse
14. DB::MaterializeMySQLSyncThread::executeDDLAtomic(DB::MySQLReplication::QueryEvent const&) @ 0xf5ddf0b in /usr/bin/clickhouse
15. DB::commitMetadata(std::__1::function<void ()> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xf62dc96 in /usr/bin/clickhouse
16. DB::MaterializeMetadata::transaction(DB::MySQLReplication::Position const&, std::__1::function<void ()> const&) @ 0xf6306a0 in /usr/bin/clickhouse
17. DB::MaterializeMySQLSyncThread::onEvent(DB::MaterializeMySQLSyncThread::Buffers&, std::__1::shared_ptr<DB::MySQLReplication::EventBase> const&, DB::MaterializeMetadata&) @ 0xf5d8ecf in /usr/bin/clickhouse
18. DB::MaterializeMySQLSyncThread::synchronization() @ 0xf5d5a4a in /usr/bin/clickhouse
19. ? @ 0xf5f8134 in /usr/bin/clickhouse
20. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8bb741f in /usr/bin/clickhouse
21. ? @ 0x8bba943 in /usr/bin/clickhouse
22. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
23. __clone @ 0xfdead in /usr/lib64/libc-2.17.so
(version 21.6.6.51 (official build))
2021.07.15 17:32:08.898196 [ 17577 ] {} <Error> MaterializeMySQLSyncThread: Code: 50, e.displayText() = DB::Exception: Unknown data type family: time: While executing MYSQL_QUERY_EVENT. The query: ALTER TABLE `midea_mss`.`sys_user`
ADD COLUMN `uname` time NULL AFTER `username`, Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8b770fa in /usr/bin/clickhouse
1. DB::DataTypeFactory::findCreatorByName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf37e274 in /usr/bin/clickhouse
2. DB::DataTypeFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::IAST> const&) const @ 0xf37d199 in /usr/bin/clickhouse
3. DB::DataTypeFactory::get(std::__1::shared_ptr<DB::IAST> const&) const @ 0xf37cf86 in /usr/bin/clickhouse
4. ? @ 0xf3e62c9 in /usr/bin/clickhouse
5. DB::DataTypeFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::IAST> const&) const @ 0xf37d1a2 in /usr/bin/clickhouse
6. DB::DataTypeFactory::get(std::__1::shared_ptr<DB::IAST> const&) const @ 0xf37cf86 in /usr/bin/clickhouse
7. ? @ 0xfc6fbfb in /usr/bin/clickhouse
8. DB::MySQLInterpreter::InterpreterAlterImpl::getRewrittenQueries(DB::MySQLParser::ASTAlterQuery const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xfc7272f in /usr/bin/clickhouse
9. DB::MySQLInterpreter::InterpreterMySQLDDLQuery<DB::MySQLInterpreter::InterpreterAlterImpl>::execute() @ 0xf8da31e in /usr/bin/clickhouse
10. DB::InterpreterExternalDDLQuery::execute() @ 0xf8d8e03 in /usr/bin/clickhouse
11. ? @ 0xfc44781 in /usr/bin/clickhouse
12. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool) @ 0xfc42e03 in /usr/bin/clickhouse
13. ? @ 0xf5de654 in /usr/bin/clickhouse
14. DB::MaterializeMySQLSyncThread::executeDDLAtomic(DB::MySQLReplication::QueryEvent const&) @ 0xf5ddf0b in /usr/bin/clickhouse
15. DB::commitMetadata(std::__1::function<void ()> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xf62dc96 in /usr/bin/clickhouse
16. DB::MaterializeMetadata::transaction(DB::MySQLReplication::Position const&, std::__1::function<void ()> const&) @ 0xf6306a0 in /usr/bin/clickhouse
17. DB::MaterializeMySQLSyncThread::onEvent(DB::MaterializeMySQLSyncThread::Buffers&, std::__1::shared_ptr<DB::MySQLReplication::EventBase> const&, DB::MaterializeMetadata&) @ 0xf5d8ecf in /usr/bin/clickhouse
18. DB::MaterializeMySQLSyncThread::synchronization() @ 0xf5d5a4a in /usr/bin/clickhouse
19. ? @ 0xf5f8134 in /usr/bin/clickhouse
20. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8bb741f in /usr/bin/clickhouse
21. ? @ 0x8bba943 in /usr/bin/clickhouse
22. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
23. __clone @ 0xfdead in /usr/lib64/libc-2.17.so
(version 21.6.6.51 (official build))
2021.07.15 17:32:08.898356 [ 17577 ] {} <Error> MaterializeMySQLSyncThread: Code: 50, e.displayText() = DB::Exception: Unknown data type family: time: While executing MYSQL_QUERY_EVENT. The query: ALTER TABLE `midea_mss`.`sys_user`
ADD COLUMN `uname` time NULL AFTER `username`, Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8b770fa in /usr/bin/clickhouse
1. DB::DataTypeFactory::findCreatorByName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf37e274 in /usr/bin/clickhouse
2. DB::DataTypeFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::IAST> const&) const @ 0xf37d199 in /usr/bin/clickhouse
3. DB::DataTypeFactory::get(std::__1::shared_ptr<DB::IAST> const&) const @ 0xf37cf86 in /usr/bin/clickhouse
4. ? @ 0xf3e62c9 in /usr/bin/clickhouse
5. DB::DataTypeFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::IAST> const&) const @ 0xf37d1a2 in /usr/bin/clickhouse
6. DB::DataTypeFactory::get(std::__1::shared_ptr<DB::IAST> const&) const @ 0xf37cf86 in /usr/bin/clickhouse
7. ? @ 0xfc6fbfb in /usr/bin/clickhouse
8. DB::MySQLInterpreter::InterpreterAlterImpl::getRewrittenQueries(DB::MySQLParser::ASTAlterQuery const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xfc7272f in /usr/bin/clickhouse
9. DB::MySQLInterpreter::InterpreterMySQLDDLQuery<DB::MySQLInterpreter::InterpreterAlterImpl>::execute() @ 0xf8da31e in /usr/bin/clickhouse
10. DB::InterpreterExternalDDLQuery::execute() @ 0xf8d8e03 in /usr/bin/clickhouse
11. ? @ 0xfc44781 in /usr/bin/clickhouse
12. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool) @ 0xfc42e03 in /usr/bin/clickhouse
13. ? @ 0xf5de654 in /usr/bin/clickhouse
14. DB::MaterializeMySQLSyncThread::executeDDLAtomic(DB::MySQLReplication::QueryEvent const&) @ 0xf5ddf0b in /usr/bin/clickhouse
15. DB::commitMetadata(std::__1::function<void ()> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xf62dc96 in /usr/bin/clickhouse
16. DB::MaterializeMetadata::transaction(DB::MySQLReplication::Position const&, std::__1::function<void ()> const&) @ 0xf6306a0 in /usr/bin/clickhouse
17. DB::MaterializeMySQLSyncThread::onEvent(DB::MaterializeMySQLSyncThread::Buffers&, std::__1::shared_ptr<DB::MySQLReplication::EventBase> const&, DB::MaterializeMetadata&) @ 0xf5d8ecf in /usr/bin/clickhouse
18. DB::MaterializeMySQLSyncThread::synchronization() @ 0xf5d5a4a in /usr/bin/clickhouse
19. ? @ 0xf5f8134 in /usr/bin/clickhouse
20. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8bb741f in /usr/bin/clickhouse
21. ? @ 0x8bba943 in /usr/bin/clickhouse
22. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
23. __clone @ 0xfdead in /usr/lib64/libc-2.17.so
(version 21.6.6.51 (official build))
``` | https://github.com/ClickHouse/ClickHouse/issues/26361 | https://github.com/ClickHouse/ClickHouse/pull/33429 | 677a7f1133c7e176dc38b291d54f63f0207e8799 | 9e91a9dfd1dae8072d9d2132a4b3e4bbb70e1c1d | "2021-07-15T10:33:31Z" | c++ | "2022-01-26T08:29:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,325 | ["src/Interpreters/MergeJoin.cpp", "src/Processors/Transforms/JoiningTransform.cpp", "tests/queries/0_stateless/01943_pmj_non_joined_stuck.reference", "tests/queries/0_stateless/01943_pmj_non_joined_stuck.sql"] | Infinite loop in "partial merge JOIN". | Thankfully "partial merge join" is disabled by default.
```
milovidov-desktop :) SET max_block_size = 6, join_algorithm = 'partial_merge'
SET max_block_size = 6, join_algorithm = 'partial_merge'
Query id: 600704b7-b455-4726-8a9c-8e45b509f2fb
Ok.
0 rows in set. Elapsed: 0.002 sec.
milovidov-desktop :) SELECT blockSize() bs FROM (SELECT 1 s) js1 ALL RIGHT JOIN (SELECT arrayJoin([2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3]) s) js2 USING (s) GROUP BY bs ORDER BY bs;
SELECT blockSize() AS bs
FROM
(
SELECT 1 AS s
) AS js1
ALL RIGHT JOIN
(
SELECT arrayJoin([2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3]) AS s
) AS js2 USING (s)
GROUP BY bs
ORDER BY bs ASC
Query id: ae51c589-4b69-4e90-a3a9-7ef4ead72af9
→ Progress: 2.00 rows, 2.00 B (18.21 rows/s., 18.21 B/s.)
```
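A hedged workaround while the bug stands is to stay on the default algorithm:
```sql
SET join_algorithm = 'hash'; -- the default; avoids the partial-merge code path
```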
Found by stress test. | https://github.com/ClickHouse/ClickHouse/issues/26325 | https://github.com/ClickHouse/ClickHouse/pull/26374 | 427813071d2482e9fef32c8627719375e003f757 | c8ead44c23a67d536aef035e095a6d97a9aecef7 | "2021-07-14T19:07:08Z" | c++ | "2021-07-16T06:50:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,303 | ["src/Core/Settings.h", "src/Functions/array/range.cpp", "tests/queries/0_stateless/01944_range_max_elements.reference", "tests/queries/0_stateless/01944_range_max_elements.sql"] | Safety threshold on data volume is triggered for function `range`. | test
```
DROP TABLE IF EXISTS range_issue;
CREATE TABLE range_issue
ENGINE = Log AS
SELECT range(rand() % 601) AS arr
FROM numbers(21000000)
```
It's a regression:
20.7 - the last version where the test passes.
20.8 - the first version where the test fails.
Error message & stacktrace:
```
Received exception from server (version 21.9.1):
Code: 69. DB::Exception: Received from localhost:9000. DB::Exception: A call to function range would produce 314754485 array elements, which is greater than the allowed maximum of 100000000: while executing 'FUNCTION range(modulo(rand(), 601) :: 2) -> range(modulo(rand(), 601)) Array(UInt16) : 1'. Stack trace:
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8f9817a in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
1. COW<DB::IColumn>::immutable_ptr<DB::IColumn> DB::FunctionRange::executeInternal<unsigned short>(DB::IColumn const*) const @ 0xed7f527 in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
2. DB::FunctionRange::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xed7d422 in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
3. DB::FunctionToExecutableFunctionAdaptor::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xb12828e in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
4. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0xfa7621e in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
5. DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0xfa76712 in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
6. DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const @ 0x100d6e15 in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
7. DB::ExpressionTransform::transform(DB::Chunk&) @ 0x1119b6dc in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
8. DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0x1119ba70 in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
9. DB::ISimpleTransform::work() @ 0x1119e947 in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
10. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, void ()> >(std::__1::__function::__policy_storage const*) @ 0x11051d7d in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
11. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x1104e911 in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
12. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x1104c94f in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
13. DB::PipelineExecutor::execute(unsigned long) @ 0x1104c72d in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
14. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x110599bf in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
15. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fd903f in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
16. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x8fdc923 in /usr/lib/debug/.build-id/8c/ba2568d486f3b979445af26ce56a0deaa65ba7.debug
17. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
18. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
```
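Judging by the changed files in the linked fix (`src/Core/Settings.h` and the `01944_range_max_elements` test), the fix appears to introduce a tunable threshold. A hedged usage sketch; the setting name is an assumption derived from the test name:
```sql
-- Raise the per-block limit if the workload legitimately produces large arrays:
SET function_range_max_elements_in_block = 500000000; -- name and value are assumptions
```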
| https://github.com/ClickHouse/ClickHouse/issues/26303 | https://github.com/ClickHouse/ClickHouse/pull/26305 | a3742c58f91d8439a2eec7ff954732f45bb4d6c7 | 11a4b56f51871ea543b3bd4510446f208fbddeb8 | "2021-07-14T07:24:40Z" | c++ | "2021-07-14T20:15:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,252 | ["src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp"] | excessive/uncontrolled logging of EmbeddedRocksDB |
```
.../dbname/tablename$ ls -l
total 76600
-rw-r----- 1 clickhouse clickhouse 1175 Jul 9 06:41 000008.sst
-rw-r----- 1 clickhouse clickhouse 80 Jul 9 07:16 000015.log
-rw-r----- 1 clickhouse clickhouse 16 Jul 9 06:42 CURRENT
-rw-r----- 1 clickhouse clickhouse 37 Jul 9 06:38 IDENTITY
-rw-r----- 1 clickhouse clickhouse 0 Jul 9 06:38 LOCK
-rw-r----- 1 clickhouse clickhouse 64020 Jul 9 08:02 LOG
-rw-r----- 1 clickhouse clickhouse 23819 Jul 9 06:40 LOG.old.1625838114977096
-rw-r----- 1 clickhouse clickhouse 27520 Jul 9 06:42 LOG.old.1625838154318798
-rw-r----- 1 clickhouse clickhouse 137 Jul 9 06:42 MANIFEST-000014
-rw-r----- 1 clickhouse clickhouse 6075 Jul 9 06:41 OPTIONS-000012
-rw-r----- 1 clickhouse clickhouse 6075 Jul 9 06:42 OPTIONS-000017
```
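For context, a minimal sketch of an EmbeddedRocksDB table whose data directory would look like the listing above (database, table, and column names are placeholders):
```sql
CREATE TABLE dbname.tablename
(
    key String,
    value String
)
ENGINE = EmbeddedRocksDB
PRIMARY KEY key;
```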
Part of the LOG file:
```
2021/07/09-06:42:34.319227 7f7384cf1000 RocksDB version: 6.15.0
2021/07/09-06:42:34.319295 7f7384cf1000 Git sha rocksdb_build_git_sha:@54a0decabbcf4c0bb5cf7befa9c597f28289bff5@
2021/07/09-06:42:34.319298 7f7384cf1000 Compile date Jun 2 2021
2021/07/09-06:42:34.319377 7f7384cf1000 DB SUMMARY
2021/07/09-06:42:34.319380 7f7384cf1000 DB Session ID: AN44HOX7JE4ELDVCK4QB
2021/07/09-06:42:34.319441 7f7384cf1000 CURRENT file: CURRENT
2021/07/09-06:42:34.319444 7f7384cf1000 IDENTITY file: IDENTITY
2021/07/09-06:42:34.319449 7f7384cf1000 MANIFEST file: MANIFEST-000009 size: 137 Bytes
2021/07/09-06:42:34.319452 7f7384cf1000 SST files in /data/clickhouse/data/Alloy/TableVersions/ dir, Total Num: 1, files: 000008.s
st
2021/07/09-06:42:34.319481 7f7384cf1000 Write Ahead Log file in /data/clickhouse/data/Alloy/TableVersions: 000010.log size: 0 ;
2021/07/09-06:42:34.319484 7f7384cf1000 Options.error_if_exists: 0
2021/07/09-06:42:34.319485 7f7384cf1000 Options.create_if_missing: 1
2021/07/09-06:42:34.319486 7f7384cf1000 Options.paranoid_checks: 1
2021/07/09-06:42:34.319488 7f7384cf1000 Options.track_and_verify_wals_in_manifest: 0
2021/07/09-06:42:34.319489 7f7384cf1000 Options.env: 0x13ca9be8
2021/07/09-06:42:34.319491 7f7384cf1000 Options.fs: Posix File System
2021/07/09-06:42:34.319492 7f7384cf1000 Options.info_log: 0x7f73a7555c60
2021/07/09-06:42:34.319493 7f7384cf1000 Options.max_file_opening_threads: 16
2021/07/09-06:42:34.319494 7f7384cf1000 Options.statistics: (nil)
2021/07/09-06:42:34.319496 7f7384cf1000 Options.use_fsync: 0
2021/07/09-06:42:34.319497 7f7384cf1000 Options.max_log_file_size: 0
...
...
2021/07/09-06:42:34.333954 7f7376feb000 [db/db_impl/db_impl.cc:902] ------- DUMPING STATS -------
2021/07/09-06:42:34.333976 7f7376feb000 [db/db_impl/db_impl.cc:903]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0 1/0 1.15 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
Sum 1/0 1.15 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Uptime(secs): 0.0 total, 0.0 interval
....
```
It looks like we need to control it via:
https://github.com/facebook/rocksdb/blob/5afd1e309c6959bf192393e4957e0c83234db4fe/include/rocksdb/options.h#L495-L499
and maybe forward those logs to the main clickhouse logs? | https://github.com/ClickHouse/ClickHouse/issues/26252 | https://github.com/ClickHouse/ClickHouse/pull/26789 | 7b76bfc7196d5585368fd0b1f2ece44a0a806cb9 | 00a0bbd4055ee2765c4afe113f19e10e55604b9c | "2021-07-12T15:21:12Z" | c++ | "2021-07-26T03:46:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,233 | ["src/Functions/abtesting.cpp", "src/Functions/abtesting.h", "src/Functions/registerFunctions.cpp", "src/Functions/tests/gtest_abtesting.cpp", "tests/queries/0_stateless/01411_bayesian_ab_testing.reference", "tests/queries/0_stateless/01411_bayesian_ab_testing.sql"] | Remove function `bayesAB` or break it in backward incompatible way. | This function is strange. Looks like the implementation is of low quality.
```
bool isDeterministic() const override { return false; }
bool isDeterministicInScopeOfQuery() const override { return false; }
```
Why is it not deterministic? A comment explaining this is missing.
```
String convertToJson(const PODArray<String> & variant_names, const Variants & variants)
{
FormatSettings settings;
WriteBufferFromOwnString buf;
writeCString("{\"data\":[", buf);
for (size_t i = 0; i < variants.size(); ++i)
{
writeCString("{\"variant_name\":", buf);
writeJSONString(variant_names[i], buf, settings);
writeCString(",\"x\":", buf);
writeText(variants[i].x, buf);
writeCString(",\"y\":", buf);
writeText(variants[i].y, buf);
writeCString(",\"beats_control\":", buf);
writeText(variants[i].beats_control, buf);
writeCString(",\"to_be_best\":", buf);
writeText(variants[i].best, buf);
writeCString("}", buf);
if (i != variant_names.size() -1)
writeCString(",", buf);
}
writeCString("]}", buf);
return buf.str();
}
```
It returns unstructured data? :scream:
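For reference, a hedged sketch of a call (the argument order follows the function's stateless test as I read it, so treat the signature as an assumption):
```sql
-- Returns a single JSON string rather than structured columns:
SELECT bayesAB('beta', 1, ['Control', 'A', 'B'],
               [3000.0, 3000.0, 3000.0],
               [100.0, 90.0, 110.0]) AS result;
```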
As a first step, we can remove the documentation for this function.
CC @nikitamikhaylov | https://github.com/ClickHouse/ClickHouse/issues/26233 | https://github.com/ClickHouse/ClickHouse/pull/29934 | 9a1f930b2fc2e6fb476ec4a85e16cd86c38c8675 | 7c67d764c0281e079b57c1bb7a042ab3dd3ffb37 | "2021-07-12T04:26:21Z" | c++ | "2021-10-12T15:00:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,216 | ["src/Core/PostgreSQL/insertPostgreSQLValue.cpp", "src/Databases/PostgreSQL/fetchPostgreSQLTableStructure.cpp", "src/Storages/PostgreSQL/StorageMaterializedPostgreSQL.cpp", "src/Storages/StoragePostgreSQL.cpp", "tests/integration/test_postgresql_replica_database_engine/test.py", "tests/integration/test_storage_postgresql/test.py"] | timestamp format should convert to DateTime64 not DateTime in postgres engine |
**Describe the unexpected behaviour**
Currently, PostgreSQL data types whose names start with `timestamp` (timestamp/timestamptz) are all converted to DateTime in the PostgreSQL engine. This loses millisecond and microsecond precision, so they should be treated as the DateTime64 data type.
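A hedged illustration of the precision at stake, using plain ClickHouse types (no PostgreSQL connection needed):
```sql
SELECT
    toDateTime64('2021-07-11 04:50:55.123456', 6) AS ts64,              -- microseconds preserved
    toDateTime(toDateTime64('2021-07-11 04:50:55.123456', 6)) AS ts;   -- DateTime truncates to whole seconds
```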
**Expected behavior**
They should be treated as the DateTime64 data type.
| https://github.com/ClickHouse/ClickHouse/issues/26216 | https://github.com/ClickHouse/ClickHouse/pull/26234 | e5e3a889847a09f071a6e5618b08407fa099dd64 | e42a26a58511792a8d8019a19f39c3ac4e9a1262 | "2021-07-11T04:50:55Z" | c++ | "2021-07-14T20:42:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,175 | ["src/AggregateFunctions/AggregateFunctionSparkbar.cpp", "src/AggregateFunctions/AggregateFunctionSparkbar.h", "src/AggregateFunctions/registerAggregateFunctions.cpp", "tests/queries/0_stateless/02016_aggregation_spark_bar.reference", "tests/queries/0_stateless/02016_aggregation_spark_bar.sql"] | `sparkbar` aggregate function | See https://github.com/deeplook/sparklines
**Describe the solution you'd like**
```
sparkbar(width)(x, y)
sparkbar(width, min_x, max_x)(x, y)
▁▂▅▆▇▆▅
```
An aggregate function calculates a histogram of `y` by `x` and then reshapes it to `width` buckets if the histogram is larger.
Linear weighted average is used for reshaping. If multiple values correspond to a single bucket, they are also averaged.
The function returns a string.
The height of each bar is proportional to the difference between `y` and the minimum value of `y` on the observed data (and then rounded to the nearest neighbor for rendering). The value of `y` can be negative, but the bar height is always non-negative. There are 8 different heights that can be rendered, plus one additional character: whitespace, which is used for absence of the value.
The boundaries for x and y can be calculated automatically or provided in parameters.
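A hedged usage sketch against an imaginary table of daily hits (`daily_hits`, `dt`, and `hits` are placeholders):
```sql
-- Render a 9-character trend of hits over time:
SELECT sparkbar(9)(dt, hits) AS trend
FROM daily_hits;
```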
**Further info**
Similar function can use ANSI escape sequences for 24-bit color to render heatmaps.
We can encode server's memory usage during query processing directly inside the famous progress bar in clickhouse-client. | https://github.com/ClickHouse/ClickHouse/issues/26175 | https://github.com/ClickHouse/ClickHouse/pull/27481 | d4823d41b01259a7f5cb7361c84f57aacecd34ea | b02c8073461baba02a98bbda70b88e8d4ed02123 | "2021-07-10T05:20:10Z" | c++ | "2021-09-12T18:29:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,151 | ["src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.cpp", "src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.h", "tests/queries/0_stateless/02013_bloom_filter_hasAll.reference", "tests/queries/0_stateless/02013_bloom_filter_hasAll.sql"] | hasAll() filter condition not relying on bloom filter; re-writing as has() AND has()... does use bloom_filter | **Describe the situation**
The hasAll() filter condition does not rely on the bloom filter, but re-writing it as a series of has() AND has()... conditions (or a series of indexOf conditions) does use the bloom filter.
**How to reproduce**
* ClickHouse server version: 21.2.7.2
Test Table:
- 33M records, 10 columns
- MergeTree() engine, has a sort order specified
- Has a field STRING_ARRAY that is an Array(String) column. It has a bloom filter data-skipping index (`TYPE bloom_filter(0.01) GRANULARITY 1`); a sketch of the DDL follows below.
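A hedged sketch of the relevant DDL (table, column, and key names are stand-ins):
```sql
CREATE TABLE db_name.table_name
(
    ID UInt64,
    STRING_ARRAY Array(String), -- plus the other columns of the real table
    INDEX idx_sa STRING_ARRAY TYPE bloom_filter(0.01) GRANULARITY 1
)
ENGINE = MergeTree
ORDER BY ID;
```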
The following query takes 1.5 seconds:
```
SELECT * FROM db_name.table_name
WHERE hasAll(STRING_ARRAY, ['value_1', 'value_2', 'value_3'])
```
Running `EXPLAIN indexes = 1`, only the primary key is picked up. The bloom filter isn't used.
The following query takes 0.45 seconds:
```
SELECT * FROM db_name.table_name
WHERE has(STRING_ARRAY, 'value_1') AND has(STRING_ARRAY, 'value_2') AND has(STRING_ARRAY, 'value_3')
```
Running `EXPLAIN indexes = 1`, the bloom filter is used.
**Expected performance**
Expecting the hasAll() filter condition to also get performance improvement from the bloom filter on STRING_ARRAY.
| https://github.com/ClickHouse/ClickHouse/issues/26151 | https://github.com/ClickHouse/ClickHouse/pull/27984 | b6ec22a08c85edb5a7b74547f8e31117ac3ac152 | 0db8b524f05cb70c47b694cf4293bf8798ef10e4 | "2021-07-09T20:23:54Z" | c++ | "2021-08-23T19:19:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,149 | ["docker/test/fasttest/run.sh", "docker/test/stateless/Dockerfile", "src/IO/ZlibInflatingReadBuffer.cpp", "tests/queries/0_stateless/02013_zlib_read_after_eof.go", "tests/queries/0_stateless/02013_zlib_read_after_eof.reference", "tests/queries/0_stateless/02013_zlib_read_after_eof.sh", "tests/queries/0_stateless/data_zlib/02013_zlib_read_after_eof_data"] | Attempt to read after eof with enabled data compression on carbon-clickhouse | **Describe the bug**
CH throws an `Attempt to read after eof` exception when `carbon-clickhouse` with enabled gzip compression (`compress-data true` in `carbon-clickhouse` config) tries to insert 5000+ metrics.
**Does it reproduce on recent release?**
Reproduces on 21.6.6.51 (stable) and v21.3.14.1 (LTS).
**How to reproduce**
***Prerequisites***
1. `carbon-clickhouse` - `clickhouse` [default scheme](https://github.com/lomik/carbon-clickhouse) inside docker
2. `compress-data` enabled in `carbon-clickhouse` config
***How it works***
When sending metrics manually by curl, it goes ok.
[metrics_rowbinary.txt](https://github.com/ClickHouse/ClickHouse/files/6793464/metrics_rowbinary.txt)
```
cat /tmp/metrics_rowbinary.txt | gzip -c | curl 'http://localhost:8123/?query=INSERT%20INTO%20graphite%20(Path,%20Value,%20Time,%20Date,%20Timestamp)%20FORMAT%20RowBinary' --data-binary @- -H 'Content-Encoding: gzip'
```
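For context, a hedged sketch of the target `graphite` table from the linked default scheme (the column set matches the INSERT above; the engine arguments and keys are assumptions):
```sql
CREATE TABLE graphite
(
    Path String,
    Value Float64,
    Time UInt32,
    Date Date,
    Timestamp UInt32
)
ENGINE = GraphiteMergeTree('graphite_rollup') -- rollup section name is an assumption
PARTITION BY toYYYYMM(Date)
ORDER BY (Path, Time);
```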
***How it doesn't work***
When sending metrics to `carbon-clickhouse` plaintext port, CH throws an exception.
[metrics.txt](https://github.com/ClickHouse/ClickHouse/files/6793465/metrics.txt)
```
cat /tmp/metrics.txt | nc -q0 localhost 2003
```
When `compress-data` is disabled in `carbon-clickhouse`, it goes with no exception too.
**Error message and/or stacktrace**
<details>
<summary>Stacktrace</summary>
<pre>
clickhouse_1 | 2021.07.09 18:56:18.756162 [ 100 ] {9d25478d-a23f-44aa-90a6-13bf6984413e} <Error> DynamicQueryHandler: Code: 32, e.displayText() = DB::Exception: Attempt to read after eof, Stack trace (when copying this message, always include the lines below):
clickhouse_1 |
clickhouse_1 | 0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8b770fa in /usr/bin/clickhouse
clickhouse_1 | 1. DB::throwReadAfterEOF() @ 0x8b88a1f in /usr/bin/clickhouse
clickhouse_1 | 2. ? @ 0x8bb26ab in /usr/bin/clickhouse
clickhouse_1 | 3. DB::SerializationString::deserializeBinary(DB::IColumn&, DB::ReadBuffer&) const @ 0xf43c5df in /usr/bin/clickhouse
clickhouse_1 | 4. DB::BinaryRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0x105273cd in /usr/bin/clickhouse
clickhouse_1 | 5. DB::IRowInputFormat::generate() @ 0x1051ccc8 in /usr/bin/clickhouse
clickhouse_1 | 6. DB::ISource::tryGenerate() @ 0x104a97d5 in /usr/bin/clickhouse
clickhouse_1 | 7. DB::ISource::work() @ 0x104a93ba in /usr/bin/clickhouse
clickhouse_1 | 8. DB::InputStreamFromInputFormat::readImpl() @ 0xdb0883f in /usr/bin/clickhouse
clickhouse_1 | 9. DB::IBlockInputStream::read() @ 0xf348eb2 in /usr/bin/clickhouse
clickhouse_1 | 10. DB::InputStreamFromASTInsertQuery::readImpl() @ 0xf8ff690 in /usr/bin/clickhouse
clickhouse_1 | 11. DB::IBlockInputStream::read() @ 0xf348eb2 in /usr/bin/clickhouse
clickhouse_1 | 12. DB::copyData(DB::IBlockInputStream&, DB::IBlockOutputStream&, std::__1::atomic<bool>*) @ 0xf36c66f in /usr/bin/clickhouse
clickhouse_1 | 13. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>) @ 0xfc48461 in /usr/bin/clickhouse
clickhouse_1 | 14. DB::HTTPHandler::processQuery(std::__1::shared_ptr<DB::Context>, DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x103fd93f in /usr/bin/clickhouse
clickhouse_1 | 15. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x104010c9 in /usr/bin/clickhouse
clickhouse_1 | 16. DB::HTTPServerConnection::run() @ 0x10484f50 in /usr/bin/clickhouse
clickhouse_1 | 17. Poco::Net::TCPServerConnection::start() @ 0x12a7a50f in /usr/bin/clickhouse
clickhouse_1 | 18. Poco::Net::TCPServerDispatcher::run() @ 0x12a7bf9a in /usr/bin/clickhouse
clickhouse_1 | 19. Poco::PooledThread::run() @ 0x12bb52f9 in /usr/bin/clickhouse
clickhouse_1 | 20. Poco::ThreadImpl::runnableEntry(void*) @ 0x12bb12ea in /usr/bin/clickhouse
clickhouse_1 | 21. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
clickhouse_1 | 22. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
</pre>
</details>
[PCAP](https://github.com/ClickHouse/ClickHouse/files/6793478/clickhouse.zip)
| https://github.com/ClickHouse/ClickHouse/issues/26149 | https://github.com/ClickHouse/ClickHouse/pull/28150 | 5bc332c40cff1434167e353899e9e56816140cee | fc37817adae7974192cf4f781ab74fb488ae43b3 | "2021-07-09T19:16:48Z" | c++ | "2021-08-26T14:17:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,086 | ["src/Access/SettingsProfileElement.h", "src/Access/UsersConfigAccessStorage.cpp"] | When using XML/YAML-based user configuration, if a profile or quota is not defined then default ones are used | Previously (until RBAC was introduced) users were not allowed to connect to ClickHouse if the user definition referenced a non-existing profile/quota: `DB::Exception: There is no profile 'does_not_exist' in configuration file..`
After RBAC was introduced this behavior changed: users can now connect and are assigned the default profile. This is unexpected, and it makes it easy for mistakes (e.g. typos) to grant users more privileges than intended (allow_ddl, readonly).
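For illustration, a minimal `users.xml` fragment of the kind that used to be rejected and is now silently accepted (a sketch; the user and profile names here are made up):
```xml
<yandex>
    <users>
        <some_user>
            <password></password>
            <networks><ip>::/0</ip></networks>
            <!-- typo / missing profile: the connection now succeeds with the default profile -->
            <profile>does_not_exist</profile>
            <quota>default</quota>
        </some_user>
    </users>
</yandex>
```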
cc @vitlibar | https://github.com/ClickHouse/ClickHouse/issues/26086 | https://github.com/ClickHouse/ClickHouse/pull/38024 | 6384fe23c336b40c832cef928b0ec44f2947f438 | 1a71e44b28e1a2cbfe393c928bae5acb11569b6e | "2021-07-08T14:53:25Z" | c++ | "2022-07-03T09:26:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,078 | ["src/Storages/MergeTree/MergeTreeData.cpp"] | `system.detached_parts` doesn't work after changing the `default` storage policy to multiple disks | **Describe the bug**
After adding a second disk to the `default` storage policy, any query against `system.detached_parts` returns an error.
**Does it reproduce on recent release?**
Yes, it reproduces on recent releases and on any LTS release: 20.3 (Ordinary), 20.8, 21.3.
**How to reproduce**
```bash
git clone https://gist.github.com/c17d50ca4b2e2785ce957debc7548420.git ./detached_and_storage_policy
cd ./detached_and_storage_policy
bash -x steps-to-reproduce.sh
```
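For reference, the storage configuration in the gist is roughly the following (a sketch reconstructed from the reproduction; the element names are assumptions):
```xml
<yandex>
    <storage_configuration>
        <disks>
            <disk2>
                <path>/var/lib/clickhouse2/</path>
            </disk2>
        </disks>
        <policies>
            <default>
                <volumes>
                    <default>
                        <disk>default</disk>
                        <disk>disk2</disk>
                    </default>
                </volumes>
            </default>
        </policies>
    </storage_configuration>
</yandex>
```
The second disk has no `detached` subdirectories for the existing tables yet, which is presumably what the query then stumbles over.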
* Queries to run that lead to an unexpected result:
```
SELECT
count() AS detached_parts,
database,
table,
disk,
if(coalesce(reason,'unknown')='','detached_by_user',coalesce(reason,'unknown')) AS detach_reason
FROM system.detached_parts
GROUP BY
database,
table,
disk,
reason
```
**Expected behavior**
An empty result.
**Error message and/or stacktrace**
```
2021.07.08 13:01:53.805096 [ 51 ] {52b361e2-8e58-4757-859e-7f867a3f7a8b} <Error> executeQuery: Poco::Exception. Code: 1000, e.code() = 2, e.displayText() = File not found: /var/lib/clickhouse2/store/8f9/8f932fde-af5a-43e1-8f93-2fdeaf5af3e1/detached (version 21.6.6.51 (official build)) (from 127.0.0.1:60496) (in query: SELECT count() AS detached_parts, database, table, disk, if(coalesce(reason,'unknown')='','detached_by_user',coalesce(reason,'unknown')) AS detach_reason FROM system.detached_parts GROUP BY database, table, disk, reason ), Stack trace (when copying this message, always include the lines below):
0. Poco::FileImpl::handleLastErrorImpl(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x12b47ab5 in /usr/bin/clickhouse
1. Poco::DirectoryIteratorImpl::DirectoryIteratorImpl(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x12b30b97 in /usr/bin/clickhouse
2. Poco::DirectoryIterator::DirectoryIterator(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x12b30e3f in /usr/bin/clickhouse
3. DB::DiskLocalDirectoryIterator::DiskLocalDirectoryIterator(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xf4a5b9d in /usr/bin/clickhouse
4. DB::DiskLocal::iterateDirectory(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xf4a2643 in /usr/bin/clickhouse
5. DB::MergeTreeData::getDetachedParts() const @ 0x1017d7a4 in /usr/bin/clickhouse
6. DB::StorageSystemDetachedParts::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context>, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0xef96f79 in /usr/bin/clickhouse
7. DB::IStorage::read(DB::QueryPlan&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context>, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0xfe06117 in /usr/bin/clickhouse
8. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&) @ 0xf921a2f in /usr/bin/clickhouse
9. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>) @ 0xf919463 in /usr/bin/clickhouse
10. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0xf918520 in /usr/bin/clickhouse
11. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0xfa888de in /usr/bin/clickhouse
12. DB::InterpreterSelectWithUnionQuery::execute() @ 0xfa899b1 in /usr/bin/clickhouse
13. ? @ 0xfc44781 in /usr/bin/clickhouse
14. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool) @ 0xfc42e03 in /usr/bin/clickhouse
15. DB::TCPHandler::runImpl() @ 0x10468bb2 in /usr/bin/clickhouse
16. DB::TCPHandler::run() @ 0x1047bab9 in /usr/bin/clickhouse
17. Poco::Net::TCPServerConnection::start() @ 0x12a7a50f in /usr/bin/clickhouse
18. Poco::Net::TCPServerDispatcher::run() @ 0x12a7bf9a in /usr/bin/clickhouse
19. Poco::PooledThread::run() @ 0x12bb52f9 in /usr/bin/clickhouse
20. Poco::ThreadImpl::runnableEntry(void*) @ 0x12bb12ea in /usr/bin/clickhouse
21. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
22. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
```
**Additional context**
For an Ordinary database:
```
Received exception from server (version 20.3.21):
Code: 1000. DB::Exception: Received from localhost:9000. DB::Exception: File not found: /var/lib/clickhouse2/data/default/t1/detached.
```
There are also a lot of background errors in the logs about the directory not existing, but they do not affect normal operation. | https://github.com/ClickHouse/ClickHouse/issues/26078 | https://github.com/ClickHouse/ClickHouse/pull/26236 | f30e614165183a5dca7276e532f7ff482960463a | 80fed6eebf96feb735b52bad3260de161d26e1fa | "2021-07-08T13:10:25Z" | c++ | "2021-07-12T16:45:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,044 | ["programs/server/play.html"] | Play UI: server URL should be also saved in URL if it differs. | **Use case**
I run https://gh-api.clickhouse.tech/play?user=team#U0VMRUNUIGNoZWNrX25hbWUsIHRlc3RfbmFtZSwgdGVzdF9zdGF0dXMsIGNoZWNrX3N0YXJ0X3RpbWUsICdodHRwczovL2NsaWNraG91c2UtdGVzdC1yZXBvcnRzLnMzLnlhbmRleC5uZXQvMC8nIHx8IGNvbW1pdF9zaGEgfHwgJy9wZXJmb3JtYW5jZV9jb21wYXJpc29uL3JlcG9ydC5odG1sJyBBUyB1cmwgRlJPTSBgZ2gtZGF0YWAuY2hlY2tzCldIRVJFCiAgICBjaGVja19uYW1lIExJS0UgJ1BlcmZvcm1hbmNlJScKICAgIEFORCB0ZXN0X3N0YXR1cyBMSUtFICclZXJyb3IlJwogICAgQU5EIHB1bGxfcmVxdWVzdF9udW1iZXIgPSAwCiAgICBBTkQgY2hlY2tfc3RhcnRfdGltZSA+PSBub3coKSAtIElOVEVSVkFMIDEwMCBEQVkKSEFWSU5HIGNoZWNrX3N0YXJ0X3RpbWUgPj0gbm93KCkgLSBJTlRFUlZBTCAxMCBEQVkKT1JERVIgQlkgY2hlY2tfc3RhcnRfdGltZSBERVNDCkxJTUlUIDEwMA==
but I have to manually specify the host https://rc1a-ity5agjmuhyu6nu9.mdb.yandexcloud.net:8443
because it is not saved in the URL. | https://github.com/ClickHouse/ClickHouse/issues/26044 | https://github.com/ClickHouse/ClickHouse/pull/26322 | f1702d356ebbb9a9919117c759efc07264bb4f1a | 28067f73d2c7886ab695c43f2bc2470afa5226b9 | "2021-07-07T00:24:59Z" | c++ | "2021-07-14T19:08:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,993 | ["src/Functions/FunctionJoinGet.h", "tests/queries/0_stateless/01735_join_get_low_card_fix.reference", "tests/queries/0_stateless/01735_join_get_low_card_fix.sql"] | joinGet Error: Invalid column type for ColumnUnique::insertRangeFrom. Expected String, got ColumnLowCardinality | **Describe the bug**
When doing `joinGet(<join table>, <column with LowCardinality(String) type>, <key>)`, ClickHouse returns `Error: Invalid column type for ColumnUnique::insertRangeFrom. Expected String, got ColumnLowCardinality`
**Version**
**21.5.4.6809**
**How to reproduce**
```sql
ClickHouse client version 21.5.4.6809.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.5.4 revision 54448.
x390 :) create table j (a String, b LowCardinality(String), c String) Engine = Join(ANY, LEFT, a)
CREATE TABLE j
(
`a` String,
`b` LowCardinality(String),
`c` String
)
ENGINE = Join(ANY, LEFT, a)
Query id: a373b613-b438-47fe-8fd6-dfabb93f9ed9
Ok.
0 rows in set. Elapsed: 0.005 sec.
x390 :) insert into j values ('a', 'b', 'c')
INSERT INTO j VALUES
Query id: ce8ba17b-43c7-4992-b901-2eb24d7ac2bb
Ok.
1 rows in set. Elapsed: 0.005 sec.
x390 :) select joinGet('j', 'b', 'a')
SELECT joinGet('j', 'b', 'a')
Query id: 52fa0fd7-6967-4fbb-9aea-13fd600ecf6b
0 rows in set. Elapsed: 0.005 sec.
Received exception from server (version 21.5.4):
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Invalid column type for ColumnUnique::insertRangeFrom. Expected String, got ColumnLowCardinality: While processing joinGet('j', 'b', 'a').
x390 :) select joinGet('j', 'c', 'a')
SELECT joinGet('j', 'c', 'a')
Query id: f125b316-f017-4e6e-9eac-9c5278fda6a9
┌─joinGet('j', 'c', 'a')─┐
│ c │
└────────────────────────┘
1 rows in set. Elapsed: 0.005 sec.
```
**Expected behavior**
```sql
SELECT joinGet('j', 'b', 'a')
┌─joinGet('j', 'b', 'a')─┐
│ b │
└────────────────────────┘
```
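Until this is fixed, a possible workaround (a sketch, not heavily tested) is to avoid `LowCardinality` in the Join table schema:
```sql
CREATE TABLE j2 (a String, b String, c String) ENGINE = Join(ANY, LEFT, a);
INSERT INTO j2 SELECT a, b, c FROM j; -- reading a Join table directly is supported in recent versions
SELECT joinGet('j2', 'b', 'a'); -- returns 'b'
```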
**Additional context**
Tested with **20.7.2.30** too | https://github.com/ClickHouse/ClickHouse/issues/25993 | https://github.com/ClickHouse/ClickHouse/pull/26118 | 806bf3d99cc73ee44f386dd5a7d35a9ef358ce03 | f48c5af90c2ad51955d1ee3b6b05d006b03e4238 | "2021-07-05T11:17:19Z" | c++ | "2021-07-09T20:45:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,963 | ["src/Common/ErrorCodes.cpp", "src/Functions/normalizeString.cpp", "src/Functions/registerFunctionsString.cpp", "tests/performance/normalize_utf8.xml", "tests/queries/0_stateless/02011_normalize_utf8.reference", "tests/queries/0_stateless/02011_normalize_utf8.sql"] | Unicode normalization/decomposition functions; diacritics removal. | **Use case**
Text preprocessing and matching inside ClickHouse.
**Describe the solution you'd like**
Wrap functions from the `icu` library. The functions should work on UTF-8 data.
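A rough sketch of what such a wrapper could look like internally, using ICU's C++ API (`Normalizer2::normalizeUTF8` operates directly on UTF-8 since ICU 60; this is only an illustration of the approach, not a proposed final implementation):
```cpp
#include <unicode/normalizer2.h>
#include <unicode/bytestream.h>
#include <stdexcept>
#include <string>

/// Normalize a UTF-8 string to NFC; NFD/NFKC/NFKD instances are obtained analogously.
std::string normalizeUTF8NFC(const std::string & input)
{
    UErrorCode status = U_ZERO_ERROR;
    const icu::Normalizer2 * normalizer = icu::Normalizer2::getNFCInstance(status);

    std::string output;
    icu::StringByteSink<std::string> sink(&output);
    normalizer->normalizeUTF8(0, icu::StringPiece(input), sink, nullptr, status);

    if (U_FAILURE(status))
        throw std::runtime_error("ICU normalization failed");
    return output;
}
```
Diacritics removal could then be built on top of NFD: decompose, drop combining marks (general category Mn), and recompose.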
**Additional context**
See also a sibling task: #17182.
| https://github.com/ClickHouse/ClickHouse/issues/25963 | https://github.com/ClickHouse/ClickHouse/pull/28633 | 11e37e7640db0685ee2be73d4e3fb418b31f1592 | ec898f1a94c776e44dc27de1277e74ad6265b1ad | "2021-07-03T21:10:23Z" | c++ | "2021-10-12T09:25:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,896 | ["src/Core/MySQL/MySQLReplication.cpp"] | MaterializeMySQL wrong values replicated | I have been getting wrong data in ClickHouse when working with the MaterializeMySQL database engine. It affects only one particular field, a Decimal(32, 8) that contains a financial amount (money, crypto, etc.). I know it is experimental, but this still looks like a bug worth reporting.
The replication works perfectly when I drop the database on ClickHouse and recreate it from scratch (initial replication). The only time the values go haywire is when I start inserting, deleting and updating the source table in MySQL.
**Version:** 21.4.6.55
**Table definition from "show create table"**
CREATE TABLE clickH.actions
(
`id` Int64,
`wallet_id` Int64,
`tx_id` Int64,
`tx_timestamp` Int64,
`counterparty_wallet_id` Int64,
`amount` Decimal(32, 8),
`tx_hash` String,
`_sign` Int8 MATERIALIZED 1,
`_version` UInt64 MATERIALIZED 1,
INDEX _version _version TYPE minmax GRANULARITY 1
)
ENGINE = ReplacingMergeTree(_version)
PARTITION BY intDiv(id, 18446744073709551)
ORDER BY (wallet_id, counterparty_wallet_id, id)
SETTINGS index_granularity = 8192
**Example values in MySQL (CORRECT):**

| amount | id |
| --- | --- |
| -0.00010000 | 745620408 |
| 267.01256666 | 784811048 |
| -0.00010000 | 1056421051 |
| -0.09516232 | 1057113705 |
| -0.00010000 | 1084950022 |
**Same rows in ClickHouse (WRONG):**

| amount | id |
| --- | --- |
| 8388611294967299294967337.94957295 | 1056421051 |
| 8388611294967299294967337.94957295 | 745620408 |
| 8388611294967299294967337.94957295 | 1084950022 |
| 8388611294967299294967337.85451063 | 1057113705 |
| 8388608000000000000000267.01256666 | 784811048 |
It is important to note that those values were correct in the initial sync and only went wrong as further updates were applied, NOT ON THOSE ROWS! IDs are loaded incrementally.
On the MySQL side, I have the binlog buffer time set to 10000s. Could this have an impact? Otherwise, I don't know what else could cause it.
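A quick way to enumerate suspicious rows on the ClickHouse side (a sketch; the threshold is an arbitrary assumption based on the magnitudes above):
```sql
SELECT id, amount
FROM clickH.actions FINAL
WHERE abs(toFloat64(amount)) > 1e15
ORDER BY id;
```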
| https://github.com/ClickHouse/ClickHouse/issues/25896 | https://github.com/ClickHouse/ClickHouse/pull/31990 | fa298b089e9669a8ffb1aaf00d0fbfb922f40f06 | 9e034ee3a5af2914aac4fd9a39fd469286eb9b86 | "2021-07-01T13:32:03Z" | c++ | "2021-12-01T16:18:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,892 | ["src/AggregateFunctions/AggregateFunctionQuantile.cpp", "tests/queries/0_stateless/01936_quantiles_cannot_return_null.reference", "tests/queries/0_stateless/01936_quantiles_cannot_return_null.sql"] | Error if quantiles is used with aggregate_functions_null_for_empty | ```sql
set aggregate_functions_null_for_empty=1;
SELECT quantiles(0.95)(x) FROM ( SELECT 1 x WHERE 0 );
DB::Exception: Nested type Array(Float64) cannot be inside Nullable type.
SELECT quantiles(0.95)(number) FROM ( SELECT number FROM numbers(10) WHERE number > 10 );
DB::Exception: Nested type Array(Float64) cannot be inside Nullable type.
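-- A possible workaround (a sketch, not verified on every version): the scalar
-- quantile returns a plain Float64, which can be wrapped in Nullable, so it
-- composes with this setting:
SELECT quantile(0.95)(x) FROM ( SELECT 1 AS x WHERE 0 );
-- returns NULL instead of throwing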
``` | https://github.com/ClickHouse/ClickHouse/issues/25892 | https://github.com/ClickHouse/ClickHouse/pull/25919 | 67a88d64effbda280769897c92b28b4ffd19cf8b | af7a97cae2a0dd2c93efd227f198219b754a3fab | "2021-07-01T12:34:23Z" | c++ | "2021-07-02T22:18:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,891 | ["programs/client/clickhouse-client.xml", "src/Client/ClientBase.cpp"] | clickhouse-client shows "default" (incorrect) database name in the prompt | ```
cat /etc/clickhouse-client/conf.d/propmt.xml
<config>
<prompt_by_server_display_name>
<default>{database}@{host} :) </default>
</prompt_by_server_display_name>
</config>
cat /etc/clickhouse-server/conf.d/default_database.xml
<?xml version="1.0"?>
<yandex>
<default_database>dw</default_database>
</yandex>
$ clickhouse-client
ClickHouse client version 21.6.5.37 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.6.5 revision 54448.
default@localhost :) select currentDatabase();
SELECT currentDatabase()
Query id: efd2952e-a0d5-4250-be3c-5c3a929d3256
┌─currentDatabase()─┐
│ dw │
└───────────────────┘
```
now: `default@localhost :)`
expected: `dw@localhost :)` | https://github.com/ClickHouse/ClickHouse/issues/25891 | https://github.com/ClickHouse/ClickHouse/pull/42508 | 46559d659b438c6b5e1559f90abdb0fd453ae7a4 | 58f238007a4a493313c212a45b4d1f5a66afcfd1 | "2021-07-01T12:26:13Z" | c++ | "2022-10-20T19:14:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,890 | ["README.md", "website/templates/index/community.html"] | The Slack link for ClickHouse is invalid; the ClickHouse community should publish a working link | Make sure to check documentation https://clickhouse.yandex/docs/en/ first. If the question is concise and probably has a short answer, asking it in Telegram chat https://telegram.me/clickhouse_en is probably the fastest way to find the answer. For more complicated questions, consider asking them on StackOverflow with "clickhouse" tag https://stackoverflow.com/questions/tagged/clickhouse
If you still prefer GitHub issues, remove all this text and ask your question here.
| https://github.com/ClickHouse/ClickHouse/issues/25890 | https://github.com/ClickHouse/ClickHouse/pull/25903 | e23cf79338f261f80543ecb62d1ccbeab31d05ca | aa8d4aea54b0a6fec2f2cf9073e70b406c0e67c9 | "2021-07-01T05:50:03Z" | c++ | "2021-07-01T19:30:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,836 | ["src/Storages/MergeTree/BackgroundJobsExecutor.cpp"] | GET_PART tasks are getting postponed for a long period of time | After a separate pool for background fetches was implemented, some tasks get stuck in the queue for a long time even though there are threads available in the pool.
Here is an example:
```
-- A new part was created at 19:37
2021.06.24 19:37:09.981174 [ 182994 ] {} <Debug> (ReplicatedMergeTreeQueue): Pulling 1 entries to queue: log-0000005816 - log-0000005816
-- Here it is, in the queue, with the 4th postpone made 30 minutes later:
node_name | create_time | new_part_name | num_tries | num_postponed | postpone_reason | last_postpone_time
-----------------+---------------------+-----------------+-----------+---------------+---------------------------------------------------------+--------------------
queue-0000005816 | 2021-06-24 19:37:09 | all_3194_3194_0 | 0 | 4 | Not executing fetch because 8 fetches already executing | 2021-06-24 20:07:10
-- The task was finally executed after 10 more minutes
2021.06.24 20:17:11.690271 [ 183347 ] {} <Debug> Fetching part all_3194_3194_0
2021.06.24 20:17:12.379247 [ 183347 ] {} <Debug> Fetched part all_3194_3194_0
```
There were plenty of opportunities to run the task earlier.
Here is the whole queue; it was not long at all:
```
node_name | type | create_time | is_currently_executing | num_tries | last_attempt_time | num_postponed | last_postpone_time
-----------------+-------------+---------------------+------------------------+-----------+---------------------+---------------+--------------------
queue-0006287584 | MERGE_PARTS | 2021-06-24 16:57:55 | 1 | 1 | 2021-06-24 16:57:56 | 0 | 1969-12-31 20:00:00
queue-0006289280 | MERGE_PARTS | 2021-06-24 18:59:30 | 1 | 1 | 2021-06-24 18:59:31 | 0 | 1969-12-31 20:00:00
queue-0006289710 | MERGE_PARTS | 2021-06-24 19:27:53 | 1 | 1 | 2021-06-24 19:27:54 | 0 | 1969-12-31 20:00:00
queue-0000005816 | GET_PART | 2021-06-24 19:37:09 | 0 | 0 | 1969-12-31 20:00:00 | 4 | 2021-06-24 20:07:10
queue-0000026327 | GET_PART | 2021-06-24 20:10:08 | 0 | 0 | 1969-12-31 20:00:00 | 2 | 2021-06-24 20:14:09
queue-0000217611 | MERGE_PARTS | 2021-06-24 20:10:34 | 1 | 1 | 2021-06-24 20:10:43 | 9 | 2021-06-24 20:10:42
queue-0000026328 | GET_PART | 2021-06-24 20:14:09 | 0 | 0 | 1969-12-31 20:00:00 | 1 | 2021-06-24 20:14:09
queue-0006290403 | MERGE_PARTS | 2021-06-24 20:14:26 | 1 | 1 | 2021-06-24 20:14:28 | 5 | 2021-06-24 20:14:28
```
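For reference, listings like the ones above can be produced with a query along these lines (a sketch; the table filter is a placeholder):
```sql
SELECT node_name, type, create_time, is_currently_executing,
       num_tries, last_attempt_time, num_postponed, last_postpone_time
FROM system.replication_queue
WHERE table = 'some_table'
ORDER BY create_time;
```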
It seems to me that GET_PART tasks always get postponed by 10 minutes unless another part for the same table is created.
```
node_name | type | create_time | num_tries | num_postponed | last_postpone_time
-----------------+-------------+---------------------+-----------+---------------+--------------------
Table1
queue-0000005816 | GET_PART | 2021-06-24 19:37:09 | 0 | 4 | 2021-06-24 20:07:10 -- the unfortunate table got postponed 4 times for 10 minutes (the last postpone is still in effect)
Table2
queue-0000026327 | GET_PART | 2021-06-24 20:10:08 | 0 | 2 | 2021-06-24 20:14:09 -- this task got checked and postponed when the next task was pulled
queue-0000026328 | GET_PART | 2021-06-24 20:14:09 | 0 | 1 | 2021-06-24 20:14:09
``` | https://github.com/ClickHouse/ClickHouse/issues/25836 | https://github.com/ClickHouse/ClickHouse/pull/25893 | 081695fd05998161cf6150c22c4399ee53d87dea | 7b4a56977dc0de8fef8dee7151a0ad0086b4ff60 | "2021-06-29T20:56:05Z" | c++ | "2021-07-03T15:03:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,821 | ["src/Interpreters/Context.cpp"] | Crash when shutting down the server with compile_expressions enabled | Build master/HEAD on fcff8dc3d59808b3410196143c66ae537096949d
Debug build with:
```
CC=clang CXX=clang++ cmake -DCMAKE_BUILD_TYPE=Debug ..
ninja
```
```
clang version 12.0.0
ninja version 1.10.2
```
**Describe the bug**
The server crashes during shutdown.
Started the server (Note that this is a clean path and default config):
```
./programs/clickhouse server --config-file=../programs/server/config.xml
```
Launch a stateless test that uses JIT (in a different tab):
```
PATH=$PATH:build_head_asserts/programs/ tests/clickhouse-test 01278_min_insert_block_size_rows_for_materialized_views
```
Once the test has finished, stop the server (Ctrl+C):
```
[...]
2021.06.29 17:29:27.067302 [ 466554 ] {} <Debug> Application: Shut down storages.
2021.06.29 17:29:27.067683 [ 466554 ] {} <Debug> MemoryTracker: Peak memory usage (for user): 2.00 MiB.
2021.06.29 17:29:27.071756 [ 466554 ] {} <Debug> Application: Destroyed global context.
2021.06.29 17:29:27.081111 [ 466554 ] {} <Information> Application: shutting down
2021.06.29 17:29:27.081174 [ 466554 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2021.06.29 17:29:27.081260 [ 466555 ] {} <Trace> BaseDaemon: Received signal -2
2021.06.29 17:29:27.081360 [ 466555 ] {} <Information> BaseDaemon: Stop SignalListener thread
Segmentation fault (core dumped)
```
**Does it reproduce on recent release?**
I've only tested in master/HEAD
**Enable crash reporting**
Done, no idea if it's useful:
```
2021.06.29 17:39:47.679502 [ 472860 ] {} <Information> SentryWriter: Sending crash reports is initialized with https://[email protected]/5226277 endpoint and ./tmp/sentry temp folder
2021.06.29 17:39:47.679608 [ 472860 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
* Trying 34.120.195.249:443...
* Connected to o388870.ingest.sentry.io () port 443 (#0)
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=*.ingest.sentry.io
* start date: Jun 26 01:07:31 2021 GMT
* expire date: Sep 24 01:07:30 2021 GMT
* subjectAltName: host "o388870.ingest.sentry.io" matched cert's "*.ingest.sentry.io"
* issuer: C=US; O=Let's Encrypt; CN=R3
* SSL certificate verify ok.
> POST /api/5226277/envelope/ HTTP/1.1
Host: o388870.ingest.sentry.io
User-Agent: sentry.native/0.3.4
Accept: */*
x-sentry-auth:Sentry sentry_key=6f33034cfe684dd7a3ab9875e57b1c8d, sentry_version=7, sentry_client=sentry.native/0.3.4
content-type:application/x-sentry-envelope
content-length:303
* upload completely sent off: 303 out of 303 bytes
* old SSL session ID is stale, removing
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 29 Jun 2021 15:39:47 GMT
< Content-Type: application/json
< Content-Length: 2
< access-control-expose-headers: x-sentry-rate-limits, retry-after, x-sentry-error
< vary: Origin
< x-envoy-upstream-service-time: 0
< Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
< Via: 1.1 google
< Alt-Svc: clear
<
{}* Connection #0 to host o388870.ingest.sentry.io left intact
```
**Error message and/or stacktrace**
Coredump backtrace:
```
#0 0x000000001db46890 in std::__1::__hash_table<std::__1::__hash_value_type<unsigned long, std::__1::unique_ptr<DB::JITModuleMemoryManager, std::__1::default_delete<DB::JITModuleMemoryManager> > >, std::__1::__unordered_map_hasher<unsigned long, std::__1::__hash_value_type<unsigned long, std::__1::unique_ptr<DB::JITModuleMemoryManager, std::__1::default_delete<DB::JITModuleMemoryManager> > >, std::__1::hash<unsigned long>, std::__1::equal_to<unsigned long>, true>, std::__1::__unordered_map_equal<unsigned long, std::__1::__hash_value_type<unsigned long, std::__1::unique_ptr<DB::JITModuleMemoryManager, std::__1::default_delete<DB::JITModuleMemoryManager> > >, std::__1::equal_to<unsigned long>, std::__1::hash<unsigned long>, true>, std::__1::allocator<std::__1::__hash_value_type<unsigned long, std::__1::unique_ptr<DB::JITModuleMemoryManager, std::__1::default_delete<DB::JITModuleMemoryManager> > > > >::find<unsigned long> (this=0x26aa10c8 <DB::getJITInstance()::jit+504>, __k=@0x7f462c39c0c8: 7)
at ../contrib/libcxx/include/__hash_table:2394
#1 0x000000001db35f8a in std::__1::unordered_map<unsigned long, std::__1::unique_ptr<DB::JITModuleMemoryManager, std::__1::default_delete<DB::JITModuleMemoryManager> >, std::__1::hash<unsigned long>, std::__1::equal_to<unsigned long>, std::__1::allocator<std::__1::pair<unsigned long const, std::__1::unique_ptr<DB::JITModuleMemoryManager, std::__1::default_delete<DB::JITModuleMemoryManager> > > > >::find (this=0x26aa10c8 <DB::getJITInstance()::jit+504>, __k=@0x7f462c39c0c8: 7)
at ../contrib/libcxx/include/unordered_map:1352
#2 0x000000001db33ad6 in DB::CHJIT::deleteCompiledModule (this=0x26aa0ed0 <DB::getJITInstance()::jit>, module_info=...) at ../src/Interpreters/JIT/CHJIT.cpp:273
#3 0x000000001d131456 in DB::CompiledFunction::~CompiledFunction (this=0x7f462c39c0b8) at ../src/Interpreters/ExpressionJIT.cpp:58
#4 0x000000001d131419 in std::__1::allocator<DB::CompiledFunction>::destroy (this=0x7ffdfbc9faa0, __p=0x7f462c39c0b8) at ../contrib/libcxx/include/memory:891
#5 0x000000001d1313dd in std::__1::allocator_traits<std::__1::allocator<DB::CompiledFunction> >::__destroy<DB::CompiledFunction> (__a=..., __p=0x7f462c39c0b8) at ../contrib/libcxx/include/__memory/allocator_traits.h:539
#6 0x000000001d13139d in std::__1::allocator_traits<std::__1::allocator<DB::CompiledFunction> >::destroy<DB::CompiledFunction> (__a=..., __p=0x7f462c39c0b8) at ../contrib/libcxx/include/__memory/allocator_traits.h:487
#7 0x000000001d13103b in std::__1::__shared_ptr_emplace<DB::CompiledFunction, std::__1::allocator<DB::CompiledFunction> >::__on_zero_shared (this=0x7f462c39c0a0) at ../contrib/libcxx/include/memory:2611
#8 0x0000000012228951 in std::__1::__shared_count::__release_shared (this=0x7f462c39c0a0) at ../contrib/libcxx/include/memory:2475
#9 0x00000000122288f9 in std::__1::__shared_weak_count::__release_shared (this=0x7f462c39c0a0) at ../contrib/libcxx/include/memory:2517
#10 0x000000001d11ec8c in std::__1::shared_ptr<DB::CompiledFunction>::~shared_ptr (this=0x7f45e7e56af8) at ../contrib/libcxx/include/memory:3212
#11 0x000000001d12cab5 in DB::CompiledFunctionCacheEntry::~CompiledFunctionCacheEntry (this=0x7f45e7e56af8) at ../src/Interpreters/ExpressionJIT.h:16
#12 0x000000001d12ca99 in std::__1::allocator<DB::CompiledFunctionCacheEntry>::destroy (this=0x7ffdfbc9fbe0, __p=0x7f45e7e56af8) at ../contrib/libcxx/include/memory:891
#13 0x000000001d12ca5d in std::__1::allocator_traits<std::__1::allocator<DB::CompiledFunctionCacheEntry> >::__destroy<DB::CompiledFunctionCacheEntry> (__a=..., __p=0x7f45e7e56af8) at ../contrib/libcxx/include/__memory/allocator_traits.h:539
#14 0x000000001d12ca1d in std::__1::allocator_traits<std::__1::allocator<DB::CompiledFunctionCacheEntry> >::destroy<DB::CompiledFunctionCacheEntry> (__a=..., __p=0x7f45e7e56af8) at ../contrib/libcxx/include/__memory/allocator_traits.h:487
#15 0x000000001d12c73b in std::__1::__shared_ptr_emplace<DB::CompiledFunctionCacheEntry, std::__1::allocator<DB::CompiledFunctionCacheEntry> >::__on_zero_shared (this=0x7f45e7e56ae0) at ../contrib/libcxx/include/memory:2611
#16 0x0000000012228951 in std::__1::__shared_count::__release_shared (this=0x7f45e7e56ae0) at ../contrib/libcxx/include/memory:2475
#17 0x00000000122288f9 in std::__1::__shared_weak_count::__release_shared (this=0x7f45e7e56ae0) at ../contrib/libcxx/include/memory:2517
#18 0x000000001d127c0c in std::__1::shared_ptr<DB::CompiledFunctionCacheEntry>::~shared_ptr (this=0x7f4634c646e0) at ../contrib/libcxx/include/memory:3212
#19 0x000000001d12fe95 in DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell::~Cell (this=0x7f4634c646e0) at ../src/Common/LRUCache.h:170
#20 0x000000001d12fe79 in std::__1::pair<wide::integer<128ul, unsigned int> const, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell>::~pair (this=0x7f4634c646d0)
at ../contrib/libcxx/include/utility:297
#21 0x000000001d12fe2d in std::__1::destroy_at<std::__1::pair<wide::integer<128ul, unsigned int> const, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell> > (
__loc=0x7f4634c646d0) at ../contrib/libcxx/include/__memory/base.h:118
#22 0x000000001d12fdb9 in std::__1::allocator_traits<std::__1::allocator<std::__1::__hash_node<std::__1::__hash_value_type<wide::integer<128ul, unsigned int>, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell>, void*> > >::__destroy<std::__1::pair<wide::integer<128ul, unsigned int> const, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell> > (__p=0x7f4634c646d0) at ../contrib/libcxx/include/__memory/allocator_traits.h:547
#23 0x000000001d12fd3d in std::__1::allocator_traits<std::__1::allocator<std::__1::__hash_node<std::__1::__hash_value_type<wide::integer<128ul, unsigned int>, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell>, void*> > >::destroy<std::__1::pair<wide::integer<128ul, unsigned int> const, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell> > (__a=..., __p=0x7f4634c646d0) at ../contrib/libcxx/include/__memory/allocator_traits.h:487
#24 0x000000001d133921 in std::__1::__hash_table<std::__1::__hash_value_type<wide::integer<128ul, unsigned int>, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell>, std::__1::__unordered_map_hasher<wide::integer<128ul, unsigned int>, std::__1::__hash_value_type<wide::integer<128ul, unsigned int>, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell>, UInt128Hash, std::__1::equal_to<wide::integer<128ul, unsigned int> >, true>, std::__1::__unordered_map_equal<wide::integer<128ul, unsigned int>, std::__1::__hash_value_type<wide::integer<128ul, unsigned int>, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell>, std::__1::equal_to<wide::integer<128ul, unsigned int> >, UInt128Hash, true>, std::__1::allocator<std::__1::__hash_value_type<wide::integer<128ul, unsigned int>, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell> > >::__deallocate_node (this=0x7f47001f0508, __np=0x7f4634c646c0)
at ../contrib/libcxx/include/__hash_table:1580
#25 0x000000001d1338a9 in std::__1::__hash_table<std::__1::__hash_value_type<wide::integer<128ul, unsigned int>, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell>, std::__1::__unordered_map_hasher<wide::integer<128ul, unsigned int>, std::__1::__hash_value_type<wide::integer<128ul, unsigned int>, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell>, UInt128Hash, std::__1::equal_to<wide::integer<128ul, unsigned int> >, true>, std::__1::__unordered_map_equal<wide::integer<128ul, unsigned int>, std::__1::__hash_value_type<wide::integer<128ul, unsigned int>, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell>, std::__1::equal_to<wide::integer<128ul, unsigned int> >, UInt128Hash, true>, std::__1::allocator<std::__1::__hash_value_type<wide::integer<128ul, unsigned int>, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell> > >::~__hash_table (this=0x7f47001f0508) at ../contrib/libcxx/include/__hash_table:1519
#26 0x000000001d132bf5 in std::__1::unordered_map<wide::integer<128ul, unsigned int>, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell, UInt128Hash, std::__1::equal_to<wide::integer<128ul, unsigned int> >, std::__1::allocator<std::__1::pair<wide::integer<128ul, unsigned int> const, DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::Cell> > >::~unordered_map (this=0x7f47001f0508) at ../contrib/libcxx/include/unordered_map:1044
#27 0x000000001d132c48 in DB::LRUCache<wide::integer<128ul, unsigned int>, DB::CompiledFunctionCacheEntry, UInt128Hash, DB::CompiledFunctionWeightFunction>::~LRUCache (this=0x7f47001f0500) at ../src/Common/LRUCache.h:164
#28 0x000000001d132b15 in DB::CompiledExpressionCache::~CompiledExpressionCache (this=0x7f47001f0500) at ../src/Interpreters/ExpressionJIT.h:45
#29 0x000000001d132b39 in DB::CompiledExpressionCache::~CompiledExpressionCache (this=0x7f47001f0500) at ../src/Interpreters/ExpressionJIT.h:45
#30 0x000000001d13288c in std::__1::default_delete<DB::CompiledExpressionCache>::operator() (this=0x26aa0e58 <DB::CompiledExpressionCacheFactory::instance()::factory>, __ptr=0x7f47001f0500) at ../contrib/libcxx/include/memory:1397
#31 0x000000001d1327fc in std::__1::unique_ptr<DB::CompiledExpressionCache, std::__1::default_delete<DB::CompiledExpressionCache> >::reset (this=0x26aa0e58 <DB::CompiledExpressionCacheFactory::instance()::factory>, __p=0x0)
at ../contrib/libcxx/include/memory:1658
#32 0x000000001d11cbd9 in std::__1::unique_ptr<DB::CompiledExpressionCache, std::__1::default_delete<DB::CompiledExpressionCache> >::~unique_ptr (this=0x26aa0e58 <DB::CompiledExpressionCacheFactory::instance()::factory>)
at ../contrib/libcxx/include/memory:1612
#33 0x000000001d11ca55 in DB::CompiledExpressionCacheFactory::~CompiledExpressionCacheFactory (this=0x26aa0e58 <DB::CompiledExpressionCacheFactory::instance()::factory>) at ../src/Interpreters/ExpressionJIT.h:52
#34 0x00007f47015af4a7 in __run_exit_handlers () from /usr/lib/libc.so.6
#35 0x00007f47015af64e in exit () from /usr/lib/libc.so.6
#36 0x00007f4701597b2c in __libc_start_main () from /usr/lib/libc.so.6
--Type <RET> for more, q to quit, c to continue without paging--
#37 0x0000000012220bee in _start ()
```
**Additional context**
Reproduces every time on my system. If I disable compile_expressions in users.xml (`<compile_expressions>false</compile_expressions>`) it stops crashing and the server shuts down nicely.
CMake config (default parameters):
<details>
```
cmake -LA
CMake Warning:
No source or binary directory provided. Both will be assumed to be the
same as the current working directory, but note that this warning will
become a fatal error in future CMake releases.
CMake Error: The source directory "/mnt/ch/ClickHouse/build_head_asserts" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
-- Cache values
ABSL_RUN_TESTS:BOOL=OFF
ABSL_USE_GOOGLETEST_HEAD:BOOL=OFF
ADD_GDB_INDEX_FOR_GOLD:BOOL=OFF
ARCH_NATIVE:BOOL=OFF
AWK_PROGRAM:FILEPATH=/usr/bin/awk
BIN_INSTALL_DIR:PATH=/usr/local/bin
BUG_REPORT_URL:STRING=https://bugs.llvm.org/
BUILD_TESTING:BOOL=ON
Backtrace_HEADER:STRING=execinfo.h
Backtrace_INCLUDE_DIR:PATH=/usr/include
Backtrace_LIBRARY:FILEPATH=
CARES_BUILD_TESTS:BOOL=OFF
CARES_BUILD_TOOLS:BOOL=ON
CARES_INSTALL:BOOL=ON
CARES_SHARED:BOOL=OFF
CARES_STATIC:BOOL=ON
CARES_STATIC_PIC:BOOL=OFF
CASS_BUILD_EXAMPLES:BOOL=OFF
CASS_BUILD_INTEGRATION_TESTS:BOOL=OFF
CASS_BUILD_SHARED:BOOL=ON
CASS_BUILD_STATIC:BOOL=ON
CASS_BUILD_TESTS:BOOL=OFF
CASS_BUILD_UNIT_TESTS:BOOL=OFF
CASS_DEBUG_CUSTOM_ALLOC:BOOL=OFF
CASS_INSTALL_HEADER:BOOL=OFF
CASS_INSTALL_HEADER_IN_SUBDIR:BOOL=OFF
CASS_INSTALL_PKG_CONFIG:BOOL=OFF
CASS_MULTICORE_COMPILATION:BOOL=ON
CASS_USE_BOOST_ATOMIC:BOOL=OFF
CASS_USE_KERBEROS:BOOL=OFF
CASS_USE_LIBSSH2:BOOL=OFF
CASS_USE_OPENSSL:BOOL=ON
CASS_USE_STATIC_LIBS:BOOL=OFF
CASS_USE_STD_ATOMIC:BOOL=ON
CASS_USE_TIMERFD:BOOL=OFF
CASS_USE_ZLIB:BOOL=ON
CCACHE_FOUND:FILEPATH=/usr/bin/ccache
CLANG_FORMAT_EXE:FILEPATH=/usr/bin/clang-format
CMAKE_ADDR2LINE:FILEPATH=/usr/bin/addr2line
CMAKE_AR:FILEPATH=/usr/bin/ar
CMAKE_ASM_COMPILER:FILEPATH=/usr/lib/ccache/bin/clang
CMAKE_ASM_COMPILER_AR:FILEPATH=/usr/bin/llvm-ar
CMAKE_ASM_COMPILER_RANLIB:FILEPATH=/usr/bin/llvm-ranlib
CMAKE_ASM_FLAGS:STRING=
CMAKE_ASM_FLAGS_DEBUG:STRING=-g
CMAKE_ASM_FLAGS_MINSIZEREL:STRING=-Os -DNDEBUG
CMAKE_ASM_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
CMAKE_ASM_FLAGS_RELWITHDEBINFO:STRING=-O2 -g -DNDEBUG
CMAKE_BUILD_TYPE:STRING=Debug
CMAKE_CONFIGURATION_TYPES:STRING=RelWithDebInfo;Debug;Release;MinSizeRel
CMAKE_CXX_COMPILER:FILEPATH=/usr/lib/ccache/bin/clang++
CMAKE_CXX_COMPILER_AR:FILEPATH=/usr/bin/llvm-ar
CMAKE_CXX_COMPILER_RANLIB:FILEPATH=/usr/bin/llvm-ranlib
CMAKE_CXX_FLAGS:STRING=
CMAKE_CXX_FLAGS_DEBUG:STRING=-g
CMAKE_CXX_FLAGS_MINSIZEREL:STRING=-Os -DNDEBUG
CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
CMAKE_CXX_FLAGS_RELWITHDEBINFO:STRING=-O2 -g -DNDEBUG
CMAKE_CXX_STANDARD:STRING=14
CMAKE_C_COMPILER:FILEPATH=/usr/lib/ccache/bin/clang
CMAKE_C_COMPILER_AR:FILEPATH=/usr/bin/llvm-ar
CMAKE_C_COMPILER_RANLIB:FILEPATH=/usr/bin/llvm-ranlib
CMAKE_C_FLAGS:STRING=
CMAKE_C_FLAGS_DEBUG:STRING=-g
CMAKE_C_FLAGS_MINSIZEREL:STRING=-Os -DNDEBUG
CMAKE_C_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
CMAKE_C_FLAGS_RELWITHDEBINFO:STRING=-O2 -g -DNDEBUG
CMAKE_DEBUG_POSTFIX:STRING=d
CMAKE_DLLTOOL:FILEPATH=/usr/bin/llvm-dlltool
CMAKE_EXE_LINKER_FLAGS:STRING=
CMAKE_EXE_LINKER_FLAGS_DEBUG:STRING=
CMAKE_EXE_LINKER_FLAGS_MINSIZEREL:STRING=
CMAKE_EXE_LINKER_FLAGS_RELEASE:STRING=
CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO:STRING=
CMAKE_EXPORT_COMPILE_COMMANDS:BOOL=
CMAKE_INSTALL_BINDIR:PATH=bin
CMAKE_INSTALL_DATADIR:PATH=
CMAKE_INSTALL_DATAROOTDIR:PATH=share
CMAKE_INSTALL_DOCDIR:PATH=
CMAKE_INSTALL_INCLUDEDIR:PATH=include
CMAKE_INSTALL_INFODIR:PATH=
CMAKE_INSTALL_LIBDIR:PATH=lib
CMAKE_INSTALL_LIBEXECDIR:PATH=libexec
CMAKE_INSTALL_LOCALEDIR:PATH=
CMAKE_INSTALL_LOCALSTATEDIR:PATH=var
CMAKE_INSTALL_MANDIR:PATH=
CMAKE_INSTALL_OLDINCLUDEDIR:PATH=/usr/include
CMAKE_INSTALL_PREFIX:PATH=/usr/local
CMAKE_INSTALL_RUNSTATEDIR:PATH=
CMAKE_INSTALL_SBINDIR:PATH=sbin
CMAKE_INSTALL_SHAREDSTATEDIR:PATH=com
CMAKE_INSTALL_SYSCONFDIR:PATH=etc
CMAKE_LINKER:FILEPATH=/usr/bin/ld
CMAKE_MAKE_PROGRAM:FILEPATH=/usr/bin/ninja
CMAKE_MODULE_LINKER_FLAGS:STRING=
CMAKE_MODULE_LINKER_FLAGS_DEBUG:STRING=
CMAKE_MODULE_LINKER_FLAGS_MINSIZEREL:STRING=
CMAKE_MODULE_LINKER_FLAGS_RELEASE:STRING=
CMAKE_MODULE_LINKER_FLAGS_RELWITHDEBINFO:STRING=
CMAKE_NM:FILEPATH=/usr/bin/nm
CMAKE_OBJCOPY:FILEPATH=/usr/bin/objcopy
CMAKE_OBJDUMP:FILEPATH=/usr/bin/objdump
CMAKE_RANLIB:FILEPATH=/usr/bin/ranlib
CMAKE_READELF:FILEPATH=/usr/bin/readelf
CMAKE_SHARED_LINKER_FLAGS:STRING=
CMAKE_SHARED_LINKER_FLAGS_DEBUG:STRING=
CMAKE_SHARED_LINKER_FLAGS_MINSIZEREL:STRING=
CMAKE_SHARED_LINKER_FLAGS_RELEASE:STRING=
CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO:STRING=
CMAKE_SKIP_INSTALL_RPATH:BOOL=NO
CMAKE_SKIP_RPATH:BOOL=NO
CMAKE_STATIC_LINKER_FLAGS:STRING=
CMAKE_STATIC_LINKER_FLAGS_DEBUG:STRING=
CMAKE_STATIC_LINKER_FLAGS_MINSIZEREL:STRING=
CMAKE_STATIC_LINKER_FLAGS_RELEASE:STRING=
CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO:STRING=
CMAKE_STRIP:FILEPATH=/usr/bin/strip
CMAKE_VERBOSE_MAKEFILE:BOOL=FALSE
COMPILER_PIPE:BOOL=ON
COVERAGE_COMMAND:FILEPATH=/usr/bin/gcov
COVERAGE_EXTRA_FLAGS:STRING=-l
CPACK_BINARY_DEB:BOOL=OFF
CPACK_BINARY_FREEBSD:BOOL=OFF
CPACK_BINARY_IFW:BOOL=OFF
CPACK_BINARY_NSIS:BOOL=OFF
CPACK_BINARY_RPM:BOOL=OFF
CPACK_BINARY_STGZ:BOOL=ON
CPACK_BINARY_TBZ2:BOOL=OFF
CPACK_BINARY_TGZ:BOOL=ON
CPACK_BINARY_TXZ:BOOL=OFF
CPACK_BINARY_TZ:BOOL=ON
CPACK_SOURCE_RPM:BOOL=OFF
CPACK_SOURCE_TBZ2:BOOL=ON
CPACK_SOURCE_TGZ:BOOL=ON
CPACK_SOURCE_TXZ:BOOL=ON
CPACK_SOURCE_TZ:BOOL=ON
CPACK_SOURCE_ZIP:BOOL=OFF
CTEST_SUBMIT_RETRY_COUNT:STRING=3
CTEST_SUBMIT_RETRY_DELAY:STRING=5
CURL_DIR:PATH=CURL_DIR-NOTFOUND
CURL_FOUND:BOOL=ON
CURL_INCLUDE_DIR:PATH=/mnt/ch/ClickHouse/contrib/curl/include
CURL_INCLUDE_DIRS:PATH=/mnt/ch/ClickHouse/contrib/curl/include
CURL_LIBRARIES:STRING=curl
CURL_LIBRARY:STRING=curl
CURL_ROOT_DIR:PATH=/mnt/ch/ClickHouse/contrib/curl
CURL_VERSION_STRING:STRING=7.67.0
DART_TESTING_TIMEOUT:STRING=1500
ENABLED_LOCAL_INFILE:STRING=AUTO
ENABLE_AMQPCPP:BOOL=ON
ENABLE_AVRO:BOOL=ON
ENABLE_BASE64:BOOL=ON
ENABLE_BROTLI:BOOL=ON
ENABLE_CAPNP:BOOL=ON
ENABLE_CASSANDRA:BOOL=ON
ENABLE_CCACHE:BOOL=ON
ENABLE_CHECK_HEAVY_BUILDS:BOOL=OFF
ENABLE_CLANG_TIDY:BOOL=OFF
ENABLE_CLICKHOUSE_ALL:BOOL=ON
ENABLE_CLICKHOUSE_BENCHMARK:BOOL=ON
ENABLE_CLICKHOUSE_CLIENT:BOOL=ON
ENABLE_CLICKHOUSE_COMPRESSOR:BOOL=ON
ENABLE_CLICKHOUSE_COPIER:BOOL=ON
ENABLE_CLICKHOUSE_EXTRACT_FROM_CONFIG:BOOL=ON
ENABLE_CLICKHOUSE_FORMAT:BOOL=ON
ENABLE_CLICKHOUSE_GIT_IMPORT:BOOL=ON
ENABLE_CLICKHOUSE_INSTALL:BOOL=ON
ENABLE_CLICKHOUSE_KEEPER:BOOL=ON
ENABLE_CLICKHOUSE_KEEPER_CONVERTER:BOOL=ON
ENABLE_CLICKHOUSE_LIBRARY_BRIDGE:BOOL=ON
ENABLE_CLICKHOUSE_LOCAL:BOOL=ON
ENABLE_CLICKHOUSE_OBFUSCATOR:BOOL=ON
ENABLE_CLICKHOUSE_ODBC_BRIDGE:BOOL=ON
ENABLE_CLICKHOUSE_SERVER:BOOL=ON
ENABLE_CPUID:BOOL=ON
ENABLE_CURL:BOOL=ON
ENABLE_CYRUS_SASL:BOOL=ON
ENABLE_DATASKETCHES:BOOL=ON
ENABLE_EMBEDDED_COMPILER:BOOL=ON
ENABLE_EXAMPLES:BOOL=OFF
ENABLE_EXPERIMENTAL_NEW_PASS_MANAGER:BOOL=FALSE
ENABLE_FASTOPS:BOOL=ON
ENABLE_FUZZING:BOOL=OFF
ENABLE_GRPC:BOOL=ON
ENABLE_GSASL_LIBRARY:BOOL=ON
ENABLE_H3:BOOL=ON
ENABLE_HDFS:BOOL=ON
ENABLE_HYPERSCAN:BOOL=ON
ENABLE_ICU:BOOL=ON
ENABLE_JEMALLOC:BOOL=ON
ENABLE_KRB5:BOOL=ON
ENABLE_LDAP:BOOL=ON
ENABLE_LIBPQXX:BOOL=ON
ENABLE_LIBRARIES:BOOL=ON
ENABLE_MSGPACK:BOOL=ON
ENABLE_MULTITARGET_CODE:BOOL=ON
ENABLE_MYSQL:BOOL=ON
ENABLE_NURAFT:BOOL=ON
ENABLE_ODBC:BOOL=ON
ENABLE_ORC:BOOL=ON
ENABLE_PARQUET:BOOL=ON
ENABLE_PROTOBUF:BOOL=ON
ENABLE_RAPIDJSON:BOOL=ON
ENABLE_RDKAFKA:BOOL=ON
ENABLE_REPLXX:BOOL=ON
ENABLE_ROCKSDB:BOOL=ON
ENABLE_S3:BOOL=ON
ENABLE_SSE:BOOL=ON
ENABLE_SSL:BOOL=ON
ENABLE_STATS:BOOL=ON
ENABLE_TESTS:BOOL=ON
FAIL_ON_UNSUPPORTED_OPTIONS_COMBINATION:BOOL=ON
FASTFLOAT_SANITIZE:BOOL=OFF
FFI_INCLUDE_DIR:PATH=
FFI_LIBRARY_DIR:PATH=
FLATBUFFERS_BUILD_CPP17:BOOL=OFF
FLATBUFFERS_BUILD_FLATC:BOOL=ON
FLATBUFFERS_BUILD_FLATHASH:BOOL=ON
FLATBUFFERS_BUILD_FLATLIB:BOOL=ON
FLATBUFFERS_BUILD_GRPCTEST:BOOL=OFF
FLATBUFFERS_BUILD_LEGACY:BOOL=OFF
FLATBUFFERS_BUILD_SHAREDLIB:BOOL=OFF
FLATBUFFERS_BUILD_TESTS:BOOL=OFF
FLATBUFFERS_CODE_COVERAGE:BOOL=OFF
FLATBUFFERS_CODE_SANITIZE:BOOL=OFF
FLATBUFFERS_ENABLE_PCH:BOOL=OFF
FLATBUFFERS_INSTALL:BOOL=ON
FLATBUFFERS_LIBCXX_WITH_CLANG:BOOL=ON
FLATBUFFERS_PACKAGE_DEBIAN:BOOL=OFF
FLATBUFFERS_PACKAGE_REDHAT:BOOL=OFF
FLATBUFFERS_STATIC_FLATC:BOOL=OFF
GCEM_BUILD_TESTS:BOOL=OFF
GCEM_CMAKECONFIG_INSTALL_DIR:STRING=lib/cmake/gcem
GIT:FILEPATH=/usr/bin/git
GITCOMMAND:FILEPATH=/usr/bin/git
GIT_EXECUTABLE:FILEPATH=/usr/bin/git
GLIBC_COMPATIBILITY:BOOL=ON
GOLD_EXECUTABLE:FILEPATH=/usr/bin/ld.gold
GOLD_PATH:FILEPATH=/usr/bin/ld.gold
GO_EXECUTABLE:FILEPATH=/usr/bin/go
GSSAPI_FLAVOR:STRING=MIT
GSSAPI_FOUND:STRING=TRUE
GSSAPI_INCS:STRING=
GSSAPI_LIBS:STRING=-lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err
GTest_DIR:PATH=/usr/lib64/cmake/GTest
Gflags_DIR:PATH=Gflags_DIR-NOTFOUND
HAVE_LIBNSL:BOOL=OFF
ICONV_INCLUDE_DIR:PATH=/usr/include
ICONV_LIBRARIES:FILEPATH=ICONV_LIBRARIES-NOTFOUND
INC_INSTALL_DIR:PATH=/usr/local/include
INSTALL_LAYOUT:STRING=DEFAULT
INSTALL_UTILS:BOOL=OFF
JEMALLOC_CONFIG_MALLOC_CONF_OVERRIDE:STRING=
KRB5_CONFIG:FILEPATH=/usr/bin/krb5-config
LIBRT:FILEPATH=/usr/lib/librt.a
LIB_INSTALL_DIR:PATH=/usr/local/lib
LINKER_NAME:BOOL=OFF
LLD_PATH:FILEPATH=/usr/bin/ld.lld
LLVM_ABI_BREAKING_CHECKS:STRING=WITH_ASSERTS
LLVM_APPEND_VC_REV:BOOL=ON
LLVM_AR_PATH:FILEPATH=/usr/bin/llvm-ar
LLVM_BINUTILS_INCDIR:PATH=
LLVM_BUILD_32_BITS:BOOL=OFF
LLVM_BUILD_BENCHMARKS:BOOL=OFF
LLVM_BUILD_DOCS:BOOL=OFF
LLVM_BUILD_EXAMPLES:BOOL=OFF
LLVM_BUILD_EXTERNAL_COMPILER_RT:BOOL=OFF
LLVM_BUILD_INSTRUMENTED:STRING=OFF
LLVM_BUILD_INSTRUMENTED_COVERAGE:BOOL=OFF
LLVM_BUILD_LLVM_C_DYLIB:BOOL=OFF
LLVM_BUILD_LLVM_DYLIB:BOOL=OFF
LLVM_BUILD_RUNTIME:BOOL=OFF
LLVM_BUILD_RUNTIMES:BOOL=OFF
LLVM_BUILD_TESTS:BOOL=OFF
LLVM_BUILD_TOOLS:BOOL=OFF
LLVM_BUILD_UTILS:BOOL=OFF
LLVM_CCACHE_BUILD:BOOL=OFF
LLVM_CODESIGNING_IDENTITY:STRING=
LLVM_DEFAULT_TARGET_TRIPLE:STRING=x86_64-unknown-linux-gnu
LLVM_DEPENDENCY_DEBUGGING:BOOL=OFF
LLVM_DYLIB_COMPONENTS:STRING=all
LLVM_ENABLE_ASSERTIONS:BOOL=ON
LLVM_ENABLE_BACKTRACES:BOOL=OFF
LLVM_ENABLE_BINDINGS:BOOL=OFF
LLVM_ENABLE_CRASH_DUMPS:BOOL=OFF
LLVM_ENABLE_CRASH_OVERRIDES:BOOL=OFF
LLVM_ENABLE_DAGISEL_COV:BOOL=OFF
LLVM_ENABLE_DOXYGEN:BOOL=OFF
LLVM_ENABLE_DUMP:BOOL=OFF
LLVM_ENABLE_EXPENSIVE_CHECKS:BOOL=OFF
LLVM_ENABLE_FFI:BOOL=OFF
LLVM_ENABLE_GISEL_COV:BOOL=OFF
LLVM_ENABLE_IDE:BOOL=ON
LLVM_ENABLE_IR_PGO:BOOL=OFF
LLVM_ENABLE_LIBCXX:BOOL=OFF
LLVM_ENABLE_LIBEDIT:BOOL=OFF
LLVM_ENABLE_LIBPFM:BOOL=OFF
LLVM_ENABLE_LIBXML2:STRING=OFF
LLVM_ENABLE_LLD:BOOL=OFF
LLVM_ENABLE_LOCAL_SUBMODULE_VISIBILITY:BOOL=ON
LLVM_ENABLE_LTO:STRING=OFF
LLVM_ENABLE_MODULES:BOOL=OFF
LLVM_ENABLE_MODULE_DEBUGGING:BOOL=OFF
LLVM_ENABLE_OCAMLDOC:BOOL=OFF
LLVM_ENABLE_PEDANTIC:BOOL=ON
LLVM_ENABLE_PER_TARGET_RUNTIME_DIR:BOOL=OFF
LLVM_ENABLE_PLUGINS:BOOL=OFF
LLVM_ENABLE_PROJECTS:STRING=
LLVM_ENABLE_PROJECTS_USED:BOOL=OFF
LLVM_ENABLE_SPHINX:BOOL=OFF
LLVM_ENABLE_STRICT_FIXED_SIZE_VECTORS:BOOL=OFF
LLVM_ENABLE_TERMINFO:BOOL=OFF
LLVM_ENABLE_THREADS:BOOL=ON
LLVM_ENABLE_UNWIND_TABLES:BOOL=ON
LLVM_ENABLE_WARNINGS:BOOL=ON
LLVM_ENABLE_WERROR:BOOL=OFF
LLVM_ENABLE_Z3_SOLVER:BOOL=OFF
LLVM_ENABLE_ZLIB:STRING=OFF
LLVM_EXPERIMENTAL_TARGETS_TO_BUILD:STRING=
LLVM_EXPORT_SYMBOLS_FOR_PLUGINS:BOOL=OFF
LLVM_EXTERNALIZE_DEBUGINFO:BOOL=OFF
LLVM_FORCE_ENABLE_STATS:BOOL=OFF
LLVM_FORCE_USE_OLD_TOOLCHAIN:BOOL=OFF
LLVM_HOST_TRIPLE:STRING=x86_64-unknown-linux-gnu
LLVM_INCLUDE_BENCHMARKS:BOOL=OFF
LLVM_INCLUDE_DOCS:BOOL=OFF
LLVM_INCLUDE_EXAMPLES:BOOL=OFF
LLVM_INCLUDE_GO_TESTS:BOOL=OFF
LLVM_INCLUDE_RUNTIMES:BOOL=OFF
LLVM_INCLUDE_TESTS:BOOL=OFF
LLVM_INCLUDE_TOOLS:BOOL=OFF
LLVM_INCLUDE_UTILS:BOOL=OFF
LLVM_INSTALL_BINUTILS_SYMLINKS:BOOL=OFF
LLVM_INSTALL_CCTOOLS_SYMLINKS:BOOL=OFF
LLVM_INSTALL_DOXYGEN_HTML_DIR:STRING=share/doc/llvm/doxygen-html
LLVM_INSTALL_MODULEMAPS:BOOL=OFF
LLVM_INSTALL_OCAMLDOC_HTML_DIR:STRING=share/doc/llvm/ocaml-html
LLVM_INSTALL_TOOLCHAIN_ONLY:BOOL=OFF
LLVM_INSTALL_UTILS:BOOL=OFF
LLVM_INTEGRATED_CRT_ALLOC:PATH=
LLVM_LIBDIR_SUFFIX:STRING=
LLVM_LIB_FUZZING_ENGINE:PATH=
LLVM_LINK_LLVM_DYLIB:BOOL=OFF
LLVM_LIT_ARGS:STRING=-sv
LLVM_LOCAL_RPATH:FILEPATH=
LLVM_OPTIMIZED_TABLEGEN:BOOL=OFF
LLVM_OPTIMIZE_SANITIZED_BUILDS:BOOL=ON
LLVM_PARALLEL_COMPILE_JOBS:STRING=
LLVM_PARALLEL_LINK_JOBS:STRING=
LLVM_PROFDATA_FILE:FILEPATH=
LLVM_RANLIB_PATH:FILEPATH=/usr/bin/llvm-ranlib
LLVM_SOURCE_PREFIX:STRING=
LLVM_SRPM_USER_BINARY_SPECFILE:FILEPATH=/mnt/ch/ClickHouse/contrib/llvm/llvm/llvm.spec.in
LLVM_STATIC_LINK_CXX_STDLIB:BOOL=OFF
LLVM_TABLEGEN:STRING=llvm-tblgen
LLVM_TARGETS_TO_BUILD:STRING=X86;AArch64
LLVM_TARGET_ARCH:STRING=host
LLVM_TARGET_TRIPLE_ENV:STRING=
LLVM_TEMPORARILY_ALLOW_OLD_TOOLCHAIN:BOOL=OFF
LLVM_TOOLS_INSTALL_DIR:STRING=bin
LLVM_UBSAN_FLAGS:STRING=-fsanitize=undefined -fno-sanitize=vptr,function -fno-sanitize-recover=all
LLVM_USE_FOLDERS:BOOL=ON
LLVM_USE_INTEL_JITEVENTS:BOOL=OFF
LLVM_USE_NEWPM:BOOL=OFF
LLVM_USE_OPROFILE:BOOL=OFF
LLVM_USE_PERF:BOOL=OFF
LLVM_USE_RELATIVE_PATHS_IN_DEBUG_INFO:BOOL=OFF
LLVM_USE_RELATIVE_PATHS_IN_FILES:BOOL=OFF
LLVM_USE_SANITIZER:STRING=
LLVM_USE_SPLIT_DWARF:BOOL=OFF
LLVM_UTILS_INSTALL_DIR:STRING=bin
LLVM_VERSION_PRINTER_SHOW_HOST_TARGET_INFO:BOOL=ON
LLVM_VP_COUNTERS_PER_SITE:STRING=1.5
LLVM_Z3_INSTALL_DIR:STRING=
MAKECOMMAND:STRING=/usr/bin/cmake --build . --config "${CTEST_CONFIGURATION_TYPE}"
MAKE_STATIC_LIBRARIES:BOOL=ON
MEMORYCHECK_COMMAND:FILEPATH=/usr/bin/valgrind
MEMORYCHECK_SUPPRESSIONS_FILE:FILEPATH=
NINJA_PATH:FILEPATH=/usr/bin/ninja
NINJA_VERSION:STRING=1.10.2
OBJCOPY_PATH:FILEPATH=/usr/bin/llvm-objcopy
OCAMLFIND:FILEPATH=OCAMLFIND-NOTFOUND
PARALLEL_COMPILE_JOBS:BOOL=OFF
PARALLEL_LINK_JOBS:BOOL=OFF
PKGCONFIG_INSTALL_DIR:PATH=/usr/local/lib/pkgconfig
PKG_CONFIG_EXECUTABLE:FILEPATH=/usr/bin/pkg-config
PY_PYGMENTS_FOUND:BOOL=OFF
PY_PYGMENTS_LEXERS_C_CPP_FOUND:BOOL=OFF
PY_YAML_FOUND:BOOL=OFF
SANITIZE:BOOL=OFF
SENTRY_BACKEND:STRING=none
SENTRY_BUILD_EXAMPLES:BOOL=OFF
SENTRY_BUILD_FORCE32:BOOL=OFF
SENTRY_BUILD_TESTS:BOOL=OFF
SENTRY_ENABLE_INSTALL:BOOL=OFF
SENTRY_EXPORT_SYMBOLS:BOOL=OFF
SENTRY_LINK_PTHREAD:BOOL=OFF
SENTRY_PIC:BOOL=OFF
SENTRY_TRANSPORT:STRING=curl
SITE:STRING=Mordor
SNAPPY_REQUIRE_AVX:BOOL=OFF
SNAPPY_REQUIRE_AVX2:BOOL=OFF
STRIP_DEBUG_SYMBOLS_FUNCTIONS:BOOL=OFF
TENSORFLOW_AOT_PATH:PATH=
TENSORFLOW_C_LIB_PATH:PATH=
TEST_HDFS_PREFIX:STRING=./
TUKLIB_FAST_UNALIGNED_ACCESS:BOOL=ON
TUKLIB_USE_UNSAFE_TYPE_PUNNING:BOOL=OFF
UNBUNDLED:BOOL=OFF
USEPCRE:BOOL=OFF
USE_AWS_MEMORY_MANAGEMENT:BOOL=OFF
USE_INCLUDE_WHAT_YOU_USE:BOOL=OFF
USE_INTERNAL_AVRO_LIBRARY:BOOL=ON
USE_INTERNAL_AWS_S3_LIBRARY:BOOL=ON
USE_INTERNAL_BOOST_LIBRARY:BOOL=ON
USE_INTERNAL_BROTLI_LIBRARY:BOOL=ON
USE_INTERNAL_CAPNP_LIBRARY:BOOL=ON
USE_INTERNAL_CCTZ_LIBRARY:BOOL=ON
USE_INTERNAL_CURL:BOOL=ON
USE_INTERNAL_DATASKETCHES_LIBRARY:BOOL=ON
USE_INTERNAL_DOUBLE_CONVERSION_LIBRARY:BOOL=ON
USE_INTERNAL_FARMHASH_LIBRARY:BOOL=ON
USE_INTERNAL_GRPC_LIBRARY:BOOL=ON
USE_INTERNAL_GTEST_LIBRARY:BOOL=ON
USE_INTERNAL_H3_LIBRARY:BOOL=ON
USE_INTERNAL_HDFS3_LIBRARY:BOOL=ON
USE_INTERNAL_HYPERSCAN_LIBRARY:BOOL=ON
USE_INTERNAL_ICU_LIBRARY:BOOL=ON
USE_INTERNAL_LDAP_LIBRARY:BOOL=ON
USE_INTERNAL_LIBCXX_LIBRARY:BOOL=ON
USE_INTERNAL_LIBGSASL_LIBRARY:BOOL=ON
USE_INTERNAL_LIBXML2_LIBRARY:BOOL=ON
USE_INTERNAL_LZ4_LIBRARY:BOOL=ON
USE_INTERNAL_MSGPACK_LIBRARY:BOOL=ON
USE_INTERNAL_MYSQL_LIBRARY:BOOL=ON
USE_INTERNAL_ODBC_LIBRARY:BOOL=ON
USE_INTERNAL_ORC_LIBRARY:BOOL=ON
USE_INTERNAL_PARQUET_LIBRARY:BOOL=ON
USE_INTERNAL_POCO_LIBRARY:BOOL=ON
USE_INTERNAL_PROTOBUF_LIBRARY:BOOL=ON
USE_INTERNAL_RAPIDJSON_LIBRARY:BOOL=ON
USE_INTERNAL_RDKAFKA_LIBRARY:BOOL=ON
USE_INTERNAL_RE2_LIBRARY:BOOL=ON
USE_INTERNAL_REPLXX_LIBRARY:BOOL=ON
USE_INTERNAL_ROCKSDB_LIBRARY:BOOL=ON
USE_INTERNAL_SNAPPY_LIBRARY:BOOL=ON
USE_INTERNAL_SPARSEHASH_LIBRARY:BOOL=ON
USE_INTERNAL_SSL_LIBRARY:BOOL=ON
USE_INTERNAL_XZ_LIBRARY:BOOL=ON
USE_INTERNAL_ZLIB_LIBRARY:BOOL=ON
USE_INTERNAL_ZSTD_LIBRARY:BOOL=ON
USE_LIBCXX:BOOL=ON
USE_SENTRY:BOOL=ON
USE_SIMDJSON:BOOL=ON
USE_SNAPPY:BOOL=ON
USE_STATIC_LIBRARIES:BOOL=ON
USE_UNWIND:BOOL=ON
USE_YAML_CPP:BOOL=ON
VERSION_EXTRA:STRING=
VERSION_TWEAK:STRING=
WERROR:BOOL=ON
WEVERYTHING:BOOL=ON
WITH_ASAN_OPTION:BOOL=OFF
WITH_AVX2:BOOL=ON
WITH_CODE_COVERAGE:BOOL=OFF
WITH_COVERAGE:BOOL=OFF
WITH_DYNCOL:BOOL=ON
WITH_FALLOCATE:BOOL=ON
WITH_FOLLY_DISTRIBUTED_MUTEX:BOOL=ON
WITH_FUZZERS:BOOL=OFF
WITH_GZFILEOP:BOOL=ON
WITH_INFLATE_ALLOW_INVALID_DIST:BOOL=OFF
WITH_INFLATE_STRICT:BOOL=OFF
WITH_JEMALLOC:BOOL=OFF
WITH_LZ4:BOOL=ON
WITH_MAINTAINER_WARNINGS:BOOL=OFF
WITH_MYSQLCOMPAT:BOOL=OFF
WITH_NEW_STRATEGIES:BOOL=ON
WITH_OPTIM:BOOL=ON
WITH_PCLMULQDQ:BOOL=ON
WITH_SANITIZER:BOOL=OFF
WITH_SNAPPY:BOOL=ON
WITH_SSE2:BOOL=ON
WITH_SSE4:BOOL=ON
WITH_SSL:BOOL=ON
WITH_SSSE3:BOOL=ON
WITH_TSAN_OPTION:BOOL=OFF
WITH_UNALIGNED:BOOL=ON
WITH_UNIT_TESTS:BOOL=ON
WITH_ZLIB:BOOL=ON
WITH_ZSTD:BOOL=ON
ZLIB_DUAL_LINK:BOOL=OFF
gRPC_ABSL_PROVIDER:STRING=clickhouse
gRPC_BACKWARDS_COMPATIBILITY_MODE:BOOL=OFF
gRPC_BUILD_CODEGEN:BOOL=ON
gRPC_BUILD_GRPC_CPP_PLUGIN:BOOL=ON
gRPC_BUILD_GRPC_CSHARP_PLUGIN:BOOL=ON
gRPC_BUILD_GRPC_NODE_PLUGIN:BOOL=ON
gRPC_BUILD_GRPC_OBJECTIVE_C_PLUGIN:BOOL=ON
gRPC_BUILD_GRPC_PHP_PLUGIN:BOOL=ON
gRPC_BUILD_GRPC_PYTHON_PLUGIN:BOOL=ON
gRPC_BUILD_GRPC_RUBY_PLUGIN:BOOL=ON
gRPC_BUILD_TESTS:BOOL=OFF
gRPC_CARES_PROVIDER:STRING=module
gRPC_INSTALL:BOOL=OFF
gRPC_INSTALL_BINDIR:STRING=bin
gRPC_INSTALL_CMAKEDIR:STRING=lib/cmake/grpc
gRPC_INSTALL_INCLUDEDIR:STRING=include
gRPC_INSTALL_LIBDIR:STRING=lib
gRPC_INSTALL_SHAREDIR:STRING=share/grpc
gRPC_PROTOBUF_PACKAGE_TYPE:STRING=
gRPC_PROTOBUF_PROVIDER:STRING=clickhouse
gRPC_RE2_PROVIDER:STRING=clickhouse
gRPC_SSL_PROVIDER:STRING=clickhouse
gRPC_USE_PROTO_LITE:BOOL=OFF
gRPC_ZLIB_PROVIDER:STRING=clickhouse
gtest_build_samples:BOOL=OFF
gtest_build_tests:BOOL=OFF
gtest_disable_pthreads:BOOL=OFF
gtest_force_shared_crt:BOOL=OFF
gtest_hide_internal_symbols:BOOL=OFF
liblzma_INSTALL_CMAKEDIR:STRING=lib/cmake/liblzma
pkg-config:FILEPATH=pkg-config-NOTFOUND
pkgcfg_lib_PC_CURL_curl:FILEPATH=/usr/lib/libcurl.so
pkgcfg_lib__LIBUV_dl:FILEPATH=/usr/lib/libdl.a
pkgcfg_lib__LIBUV_uv:FILEPATH=/usr/lib/libuv.so
pkgcfg_lib__OPENSSL_crypto:FILEPATH=/usr/lib/libcrypto.so
pkgcfg_lib__OPENSSL_ssl:FILEPATH=/usr/lib/libssl.so
protobuf_BUILD_CONFORMANCE:BOOL=OFF
protobuf_BUILD_EXAMPLES:BOOL=OFF
protobuf_BUILD_PROTOC_BINARIES:BOOL=ON
protobuf_DEBUG_POSTFIX:STRING=d
protobuf_MODULE_COMPATIBLE:BOOL=OFF
protobuf_MSVC_STATIC_RUNTIME:BOOL=ON
protobuf_VERBOSE:BOOL=OFF
```
</details> | https://github.com/ClickHouse/ClickHouse/issues/25821 | https://github.com/ClickHouse/ClickHouse/pull/25835 | 3b5468d4a43e0f2f863ce6b90b2d62bb4a425fc0 | e0d04edade321d3e11c245c22751a5fd1b094ed8 | "2021-06-29T15:44:11Z" | c++ | "2021-06-29T22:47:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,806 | ["src/DataTypes/EnumValues.h", "src/Functions/FunctionHelpers.cpp", "src/Functions/FunctionHelpers.h", "src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/00850_global_join_dups.reference", "tests/queries/0_stateless/00850_global_join_dups.sql", "tests/queries/0_stateless/01736_null_as_default.reference", "tests/queries/0_stateless/01736_null_as_default.sql"] | Setting join_use_nulls to 1 may throw an exception unexpectedly | **Describe the bug**
A combination of `join_use_nulls=1`, a `Nullable(Enum)` column, an `OUTER JOIN`, and `toString()` may throw an exception while generating a result.
**Does it reproduce on recent release?**
This happens on 21.6.3 stable.
**How to reproduce**
With the following setup:
```sql
CREATE TABLE a (
key String
) ENGINE=MergeTree ORDER BY key;
CREATE TABLE b (
key String,
data Nullable(Enum16('a'=1, 'b'=2))
) ENGINE=MergeTree ORDER BY key;
INSERT INTO a VALUES('x');
INSERT INTO a VALUES('y');
INSERT INTO b VALUES('x', 'a');
```
This query appears to work correctly:
```
SELECT
key, data
FROM a
ANY LEFT OUTER JOIN b USING (key)
FORMAT JSONCompact
SETTINGS join_use_nulls = 1;
```
```
{
"meta":
[
{
"name": "key",
"type": "String"
},
{
"name": "data",
"type": "Nullable(Enum16('a' = 1, 'b' = 2))"
}
],
"data":
[
["y", null],
["x", "a"]
],
"rows": 2,
"statistics":
{
"elapsed": 0.002955967,
"rows_read": 3,
"bytes_read": 33
}
}
```
But it throws an exception when `toString()` is applied to the joined column:
```sql
SELECT
key, toString(data)
FROM a
LEFT OUTER JOIN b USING (key)
FORMAT JSONCompact
SETTINGS join_use_nulls = 1;
```
```
{
"meta":
[
{
"name": "key",
"type": "String"
},
{
"name": "toString(data)",
"type": "Nullable(String)"
}
],
"data":
[
["x", "a"] Progress: 0.00 rows, 0.00 B (0.00 rows/s., 0.00 B/s.)
1 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 21.6.3):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Unexpected value 0 in enum: while executing 'FUNCTION toString(data :: 1) -> toString(data) Nullable(String) : 2'.
```
**Expected behavior**
It produces a string `NULL` without throwing an exception. | https://github.com/ClickHouse/ClickHouse/issues/25806 | https://github.com/ClickHouse/ClickHouse/pull/26123 | c01497a80a7157a6bd2e3c7e3624940fe64b7ddf | c6177bd0ccfac7151764d42ca1af8114ea146db1 | "2021-06-29T06:27:33Z" | c++ | "2021-07-14T07:22:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,785 | ["src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/02480_interval_casting_and_subquery.reference", "tests/queries/0_stateless/02480_interval_casting_and_subquery.sql"] | Interval data type is not passed correctly from subquery | **Describe the bug**
The Interval data type is not passed correctly from a subquery.
**Does it reproduce on recent release?**
Yes, tested on 21.6.5.37.
**How to reproduce**
```sql
SELECT
toIntervalDay(5) AS interval,
now() + interval AS res
┌─interval─┬─────────────────res─┐
│ 5 │ 2021-07-03 13:11:45 │
└──────────┴─────────────────────┘
```
But if I wrap the interval calculation in a subquery:
```sql
SELECT
(SELECT toIntervalDay(5)) AS interval,
now() + interval AS res
Received exception from server (version 21.6.5):
Code: 70. DB::Exception: Received from localhost:9000. DB::Exception: Conversion from Int8 to IntervalDay is not supported: While processing CAST(5, 'IntervalDay') AS interval, now() + interval AS res.
```
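A possible workaround sketch (untested here): compute the plain number in the scalar subquery and apply `toIntervalDay` outside it, so the Interval type is constructed after the subquery:

```sql
SELECT
    toIntervalDay((SELECT 5)) AS interval,
    now() + interval AS res
```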
**Expected behavior**
Both queries should work the same way. | https://github.com/ClickHouse/ClickHouse/issues/25785 | https://github.com/ClickHouse/ClickHouse/pull/43193 | b7df9aa3da3d9d791902f9aabbf7e6d7ebbfaa79 | d885b3cc4c5514e4e8a090e085041e589060a03a | "2021-06-28T13:15:49Z" | c++ | "2022-11-14T12:32:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,744 | ["docs/en/getting-started/example-datasets/index.md", "docs/en/getting-started/example-datasets/menus.md"] | New York Public Library menus dataset | http://menus.nypl.org/data
A small dataset (1.5 million records), nice toy example.
Copyright-free. | https://github.com/ClickHouse/ClickHouse/issues/25744 | https://github.com/ClickHouse/ClickHouse/pull/27488 | 60afe2e251795604eac238ddba14ac1c970052c5 | d245eb1705018d1413c0db6985f47ea348c10c6d | "2021-06-27T16:59:50Z" | c++ | "2021-08-09T19:49:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,718 | ["src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreePartInfo.cpp", "src/Storages/MergeTree/MergeTreePartInfo.h", "tests/queries/0_stateless/01925_broken_partition_id_zookeeper.reference", "tests/queries/0_stateless/01925_broken_partition_id_zookeeper.sql"] | Validate partition ID before DROP PARTITION | How to reproduce bug:
```sql
CREATE TABLE broken_partition
(
date Date,
key UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/test_01925_{database}/rmt', 'r1')
ORDER BY tuple()
PARTITION BY date;
ALTER TABLE broken_partition DROP PARTITION ID '20210325_0_13241_6_12747';
ALTER TABLE broken_partition DROP PARTITION ID '20210325_0_13241_6_12747';
```
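Note that the string above has the shape of a part name rather than a partition ID; for this `PARTITION BY date` table a well-formed partition ID would be just the compact date, e.g. (hypothetical value):

```sql
ALTER TABLE broken_partition DROP PARTITION ID '20210325';
```

The server then crashes with:

```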
2021.06.25 16:56:47.994597 [ 12133 ] {} <Fatal> BaseDaemon: (version 21.7.1.1, build id: 4D0F6EE29A7F8AAB48AB6DE9984AB510674ABDB0) (from thread 12263) Terminate called for uncaught exception:
2021.06.25 16:56:47.995991 [ 12396 ] {} <Fatal> BaseDaemon: ########################################
2021.06.25 16:56:47.996217 [ 12396 ] {} <Fatal> BaseDaemon: (version 21.7.1.1, build id: 4D0F6EE29A7F8AAB48AB6DE9984AB510674ABDB0) (from thread 12263) (no query) Received signal Aborted (6)
2021.06.25 16:56:47.996344 [ 12396 ] {} <Fatal> BaseDaemon:
2021.06.25 16:56:47.996526 [ 12396 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f375eecff47 0x7f375eed18b1 0x1c54e134 0x257f0e92 0x257f0dc2 0x1e1a4fe8 0x1dc44bc7 0x1dc84558 0x1dc844fd 0x1dc844bd 0x1dc84495 0x1dc8445d 0x12a90069 0x12a8f195 0x1c
db2aca 0x1cdb70c6 0x1cdb4a00 0x1cdb5bf8 0x1cdb5bbd 0x1cdb5b61 0x1cdb5a72 0x1cdb5967 0x1cdb587d 0x1cdb583d 0x1cdb5815 0x1cdb57e0 0x12a90069 0x12a8f195 0x12ab5a2e 0x12abcd44 0x12abcc9d 0x12abcbc5 0x12abc4e2 0x7f375f6956db 0x7f375efb2a3f
2021.06.25 16:56:47.997902 [ 12396 ] {} <Fatal> BaseDaemon: 4. /build/glibc-2ORdQG/glibc-2.27/signal/../sysdeps/unix/sysv/linux/raise.c:51: raise @ 0x3ef47 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.27.so
2021.06.25 16:56:48.002153 [ 12396 ] {} <Fatal> BaseDaemon: 5. /build/glibc-2ORdQG/glibc-2.27/stdlib/abort.c:81: __GI_abort @ 0x408b1 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.27.so
2021.06.25 16:56:48.321740 [ 12396 ] {} <Fatal> BaseDaemon: 6. /home/alesap/code/cpp/ClickHouse/base/daemon/BaseDaemon.cpp:435: terminate_handler() @ 0x1c54e134 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2021.06.25 16:56:48.384684 [ 12396 ] {} <Fatal> BaseDaemon: 7. /home/alesap/code/cpp/ClickHouse/contrib/libcxxabi/src/cxa_handlers.cpp:59: std::__terminate(void (*)()) @ 0x257f0e92 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2021.06.25 16:56:48.447969 [ 12396 ] {} <Fatal> BaseDaemon: 8. /home/alesap/code/cpp/ClickHouse/contrib/libcxxabi/src/cxa_handlers.cpp:89: std::terminate() @ 0x257f0dc2 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2021.06.25 16:56:49.112745 [ 12396 ] {} <Fatal> BaseDaemon: 9. /home/alesap/code/cpp/ClickHouse/src/Storages/MergeTree/ReplicatedMergeTreeQueue.cpp:608: DB::ReplicatedMergeTreeQueue::pullLogsToQueue(std::__1::shared_ptr<zkutil::ZooKeeper>,
std::__1::function<void (Coordination::WatchResponse const&)>) @ 0x1e1a4fe8 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2021.06.25 16:56:50.151217 [ 12396 ] {} <Fatal> BaseDaemon: 10. /home/alesap/code/cpp/ClickHouse/src/Storages/StorageReplicatedMergeTree.cpp:3052: DB::StorageReplicatedMergeTree::queueUpdatingTask() @ 0x1dc44bc7 in /home/alesap/code/cpp/Bu
ildCH/programs/clickhouse
2021.06.25 16:56:51.262776 [ 12396 ] {} <Fatal> BaseDaemon: 11. /home/alesap/code/cpp/ClickHouse/src/Storages/StorageReplicatedMergeTree.cpp:298: DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(std::__1::basic_string<char, std::
__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, DB::StorageID const&, std::__1::basic_string<char, std::__1::char_traits<char>,
std::__1::allocator<char> > const&, DB::StorageInMemoryMetadata const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::MergeTreeData::MergingParams cons
t&, std::__1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings> >, bool, bool)::$_3::operator()() const @ 0x1dc84558 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2021.06.25 16:56:52.400173 [ 12396 ] {} <Fatal> BaseDaemon: 12. /home/alesap/code/cpp/ClickHouse/contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, DB::StorageID const&, std::__1::basic_string<char, std::__1:
:char_traits<char>, std::__1::allocator<char> > const&, DB::StorageInMemoryMetadata const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::MergeTreeData:
:MergingParams const&, std::__1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings> >, bool, bool)::$_3&>(fp)()) std::__1::__invoke<DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(std::__1::basic_s
tring<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, DB::StorageID const&, std::__1::basic_string<char, std::__1::c
har_traits<char>, std::__1::allocator<char> > const&, DB::StorageInMemoryMetadata const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::MergeTreeData::MergingParams const&, std::__1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings> >, bool, bool)::$_3&>(DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(std::__1::basic_string<char, std::__1::char_t
raits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, DB::StorageID const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::
allocator<char> > const&, DB::StorageInMemoryMetadata const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::MergeTreeData::MergingParams const&, std::__
1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings> >, bool, bool)::$_3&) @ 0x1dc844fd in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2021.06.25 16:56:53.487741 [ 12396 ] {} <Fatal> BaseDaemon: 13. /home/alesap/code/cpp/ClickHouse/contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, DB::StorageID const&, st
d::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::StorageInMemoryMetadata const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocato
r<char> > const&, DB::MergeTreeData::MergingParams const&, std::__1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings> >, bool, bool)::$_3&>(DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, DB::StorageID const&, std::__1::basic_string<char, s
td::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::StorageInMemoryMetadata const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::MergeT
reeData::MergingParams const&, std::__1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings> >, bool, bool)::$_3&) @ 0x1dc844bd in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2021.06.25 16:56:54.585446 [ 12396 ] {} <Fatal> BaseDaemon: 14. /home/alesap/code/cpp/ClickHouse/contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(
std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, DB::StorageID const&, std::__1::basic_string<
char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::StorageInMemoryMetadata const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::MergeTreeData::MergingParams const&, std::__1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings> >, bool, bool)::$_3, void ()>::operator()() @ 0x1dc84495 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2021.06.25 16:56:55.697832 [ 12396 ] {} <Fatal> BaseDaemon: 15. /home/alesap/code/cpp/ClickHouse/contrib/libcxx/include/functional:2089: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<c
har> > const&, bool, DB::StorageID const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::StorageInMemoryMetadata const&, std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, st
d::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::MergeTreeData::MergingParams const&, std::__1::unique_ptr<DB::MergeTreeSettings, std::__1::default_delete<DB::MergeTreeSettings> >, bool, bool)::$_3, void ()> >(std::__1::_
_function::__policy_storage const*) @ 0x1dc8445d in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2021.06.25 16:56:55.899475 [ 12396 ] {} <Fatal> BaseDaemon: 16. /home/alesap/code/cpp/ClickHouse/contrib/libcxx/include/functional:2221: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x12a90069 in /home/alesap/code/cpp
/BuildCH/programs/clickhouse
2021.06.25 16:56:56.106791 [ 12396 ] {} <Fatal> BaseDaemon: 17. /home/alesap/code/cpp/ClickHouse/contrib/libcxx/include/functional:2560: std::__1::function<void ()>::operator()() const @ 0x12a8f195 in /home/alesap/code/cpp/BuildCH/programs
/clickhouse
2021.06.25 16:56:56.207032 [ 12396 ] {} <Fatal> BaseDaemon: 18. /home/alesap/code/cpp/ClickHouse/src/Core/BackgroundSchedulePool.cpp:106: DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x1cdb2aca in /home/alesap/code/cpp/BuildCH/programs/
clickhouse
2021.06.25 16:56:56.317005 [ 12396 ] {} <Fatal> BaseDaemon: 19. /home/alesap/code/cpp/ClickHouse/src/Core/BackgroundSchedulePool.cpp:19: DB::TaskNotification::execute() @ 0x1cdb70c6 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
2021.06.25 16:56:56.420614 [ 12396 ] {} <Fatal> BaseDaemon: 20. /home/alesap/code/cpp/ClickHouse/src/Core/BackgroundSchedulePool.cpp:265: DB::BackgroundSchedulePool::threadFunction() @ 0x1cdb4a00 in /home/alesap/code/cpp/BuildCH/programs/c
lickhouse
2021.06.25 16:56:56.532224 [ 12396 ] {} <Fatal> BaseDaemon: 21. /home/alesap/code/cpp/ClickHouse/src/Core/BackgroundSchedulePool.cpp:161: DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1::o
perator()() const @ 0x1cdb5bf8 in /home/alesap/code/cpp/BuildCH/programs/clickhouse
``` | https://github.com/ClickHouse/ClickHouse/issues/25718 | https://github.com/ClickHouse/ClickHouse/pull/26963 | ec9769525711d9d8bb381ba1628a6dba46fac81a | cae5502d51ce1179dfed788383942c5962e36420 | "2021-06-25T13:57:35Z" | c++ | "2021-07-30T08:21:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,683 | ["src/Server/MySQLHandler.cpp", "tests/integration/test_mysql_protocol/test.py"] | mysql client connecting to ClickHouse port 9004 gets Segmentation fault (core dumped) | **Describe the bug**
```
# /usr/local/mysql/bin/mysql -h127.0.0.1 -P9004 -udefault
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 17
Server version: 21.6.5.37-ClickHouse

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Segmentation fault (core dumped)
```
Environment:
* CentOS 7.5
* mysql client version: 8.0.20, 8.0.22, 8.0.25
* ClickHouse server version: 21.6.4 and later
**How to reproduce**
* ClickHouse server version: 21.6.4
* mysql client versions used:
```
mysql --version
mysql Ver 8.0.25 for macos11 on x86_64 (MySQL Community Server - GPL)
mysql --version
mysql Ver 8.0.23 for Linux on x86_64 (MySQL Community Server - GPL)
```
**Expected behavior**
The mysql client should be able to connect and run queries without crashing.
**Error message and/or stacktrace**
```
strace /usr/local/mysql/bin/mysql -h127.0.0.1 -P9004 -uwubx -p
munmap(0x7f5dbb4f5000, 4096) = 0
sendto(3, "\16\0\0\0\3select USER()", 18, 0, NULL, 0) = 18
recvfrom(3, "\1\0\0\1\1#\0\0\2\3def\0\0\0\rcurrentUser()\0\f"..., 16384, 0, NULL, NULL) = 131
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=NULL} ---
+++ killed by SIGSEGV +++
Segmentation fault
```
**Additional context**
```
gdb ./bin/mysql ./core.18564
(gdb) bt
#0 0x00007ffff620e721 in __strlen_sse2_pminub () from /lib64/libc.so.6
#1 0x000000000043ebcc in my_strdup (key=key@entry=0, from=0x0, my_flags=my_flags@entry=16)
    at /data/mysql-8.0.20/mysys/my_malloc.cc:295
#2 0x000000000040c234 in init_username () at /data/mysql-8.0.20/client/mysql.cc:5228
#3 init_username () at /data/mysql-8.0.20/client/mysql.cc:5219
#4 0x0000000000414450 in construct_prompt () at /data/mysql-8.0.20/client/mysql.cc:5131
#5 read_and_execute(bool) () at /data/mysql-8.0.20/client/mysql.cc:2216
#6 0x000000000040a357 in main () at /data/mysql-8.0.20/client/mysql.cc:1432
#7 0x00007ffff60c1555 in __libc_start_main () from /lib64/libc.so.6
#8 0x000000000040add0 in _start () at /opt/rh/devtoolset-8/root/usr/include/c++/8/new:169
```
```cpp
// client/mysql.cc
static void init_username() {
  my_free(full_username);
  my_free(part_username);

  MYSQL_RES *result = nullptr;
  if (!mysql_query(&mysql, "select USER()") &&
      (result = mysql_use_result(&mysql))) {
    MYSQL_ROW cur = mysql_fetch_row(result);
    // cur[0] is NULL here when the server returns no user name,
    // so my_strdup() ends up calling strlen(NULL) and crashes.
    full_username = my_strdup(PSI_NOT_INSTRUMENTED, cur[0], MYF(MY_WME));
    part_username =
        my_strdup(PSI_NOT_INSTRUMENTED, strtok(cur[0], "@"), MYF(MY_WME));
    (void)mysql_fetch_row(result);  // Read eof
  }
}
```
From the source and the gdb trace: when the mysql client connects to ClickHouse on port 9004 and executes `select USER()`, the returned user name is NULL (`from=0x0` in frame #1), which crashes the client. It looks like ClickHouse's MySQL protocol handler does not fill in the client info (`context->getClientInfo`).

**Expected fix**
Please make the ClickHouse server's MySQL protocol implementation return the user name for `select USER()`, i.e. implement `context->getClientInfo` support.
| https://github.com/ClickHouse/ClickHouse/issues/25683 | https://github.com/ClickHouse/ClickHouse/pull/25697 | 1c5987ce710a861238401a34e9dd207613f2cebc | 420839a4c0aaf980969944acc7704d53c7bb2220 | "2021-06-24T14:16:38Z" | c++ | "2021-06-26T00:31:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,632 | ["src/Common/ThreadStatus.cpp", "src/Common/checkStackSize.cpp"] | Building ClickHouse on Mac: Stack size too large | **To compile ClickHouse on Mac, the steps are as follows:**
```bash
$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
$ brew install cmake ninja libtool gettext
git clone --recursive git@github.com:ClickHouse/ClickHouse.git
cd ClickHouse
mkdir build
cd build
cmake .. -DCMAKE_CXX_COMPILER=`which clang++` -DCMAKE_C_COMPILER=`which clang`
ninja
cd ..
```
**Execute clickhouse-server, the error is as follows:**
```
Processing configuration file 'config.xml'.
There is no file 'config.xml', will use embedded config.
Logging trace to console
2021.06.23 21:01:58.031513 [ 75789 ] {} <Information> : Starting ClickHouse 21.7.1.1 with revision 54452, no build id, PID 2615
2021.06.23 21:01:58.041521 [ 75789 ] {} <Information> Application: starting up
2021.06.23 21:01:58.041572 [ 75789 ] {} <Information> Application: OS name: Darwin, version: 19.6.0, architecture: x86_64
2021.06.23 21:01:58.041713 [ 75789 ] {} <Warning> ThreadStatus: Cannot set alternative signal stack for thread, errno: 12, strerror: Cannot allocate memory
2021.06.23 21:01:58.053761 [ 75789 ] {} <Information> StatusFile: Status file ./status already exists - unclean restart. Contents:
PID: 2605
Started at: 2021-06-23 21:00:49
Revision: 54452
2021.06.23 21:01:58.053888 [ 75789 ] {} <Debug> Application: rlimit on number of file descriptors is 547504
2021.06.23 21:01:58.053904 [ 75789 ] {} <Debug> Application: Initializing DateLUT.
2021.06.23 21:01:58.053910 [ 75789 ] {} <Trace> Application: Initialized DateLUT with time zone 'Asia/Shanghai'.
2021.06.23 21:01:58.053922 [ 75789 ] {} <Debug> Application: Setting up ./tmp/ to store temporary data in it
2021.06.23 21:01:58.053997 [ 75789 ] {} <Debug> Application: Initiailizing interserver credentials.
2021.06.23 21:01:58.056298 [ 75789 ] {} <Debug> ConfigReloader: Loading config 'config.xml'
Processing configuration file 'config.xml'.
There is no file 'config.xml', will use embedded config.
Saved preprocessed configuration to './preprocessed_configs/config.xml'.
2021.06.23 21:01:58.056663 [ 75789 ] {} <Debug> ConfigReloader: Loaded config 'config.xml', performing update on configuration
2021.06.23 21:01:58.056893 [ 75789 ] {} <Information> Application: Setting max_server_memory_usage was set to 14.40 GiB (16.00 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2021.06.23 21:01:58.057069 [ 75789 ] {} <Debug> ConfigReloader: Loaded config 'config.xml', performed update on configuration
2021.06.23 21:01:58.057153 [ 75791 ] {} <Warning> ThreadStatus: Cannot set alternative signal stack for thread, errno: 12, strerror: Cannot allocate memory
2021.06.23 21:01:58.059350 [ 75789 ] {} <Debug> ConfigReloader: Loading config 'config.xml'
Processing configuration file 'config.xml'.
There is no file 'config.xml', will use embedded config.
Saved preprocessed configuration to './preprocessed_configs/config.xml'.
2021.06.23 21:01:58.059672 [ 75789 ] {} <Debug> ConfigReloader: Loaded config 'config.xml', performing update on configuration
2021.06.23 21:01:58.060292 [ 75789 ] {} <Debug> ConfigReloader: Loaded config 'config.xml', performed update on configuration
2021.06.23 21:01:58.060561 [ 75789 ] {} <Debug> Access(user directories): Added users.xml access storage 'users.xml', path: config.xml
2021.06.23 21:01:58.060781 [ 75789 ] {} <Information> Application: Loading metadata from ./
2021.06.23 21:01:58.061208 [ 75789 ] {} <Information> DatabaseAtomic (system): Total 0 tables and 0 dictionaries.
2021.06.23 21:01:58.061221 [ 75789 ] {} <Information> DatabaseAtomic (system): Starting up tables.
2021.06.23 21:01:58.063974 [ 75789 ] {} <Information> DatabaseAtomic (default): Total 0 tables and 0 dictionaries.
2021.06.23 21:01:58.064004 [ 75789 ] {} <Information> DatabaseAtomic (default): Starting up tables.
2021.06.23 21:01:58.064172 [ 75789 ] {} <Information> DatabaseAtomic (test): Total 0 tables and 0 dictionaries.
2021.06.23 21:01:58.064183 [ 75789 ] {} <Information> DatabaseAtomic (test): Starting up tables.
2021.06.23 21:01:58.064229 [ 75789 ] {} <Information> DatabaseCatalog: Found 0 partially dropped tables. Will load them and retry removal.
2021.06.23 21:01:58.064245 [ 75789 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 128 threads
2021.06.23 21:01:58.064730 [ 75811 ] {} <Warning> ThreadStatus: Cannot set alternative signal stack for thread, errno: 12, strerror: Cannot allocate memory
[... the same ThreadStatus warning repeated once per BackgroundSchedulePool thread ...]
2021.06.23 21:01:58.068797 [ 75789 ] {} <Debug> Application: Loaded metadata.
2021.06.23 21:01:58.068808 [ 75789 ] {} <Information> Application: Query Profiler and TraceCollector are disabled because they cannot work without bundled unwind (stack unwinding) library.
2021.06.23 21:01:58.068818 [ 75789 ] {} <Information> Application: Query Profiler and TraceCollector are disabled because they require PHDR cache to be created (otherwise the function 'dl_iterate_phdr' is not lock free and not async-signal safe).
2021.06.23 21:01:58.068835 [ 75789 ] {} <Information> Application: TaskStats is not implemented for this OS. IO accounting will be disabled.
[... more of the same ThreadStatus warnings ...]
2021.06.23 21:01:58.069383 [ 75789 ] {} <Information> Application: Listening for http://[::1]:8123
[... more of the same ThreadStatus warnings ...]
2021.06.23 21:01:58.071230 [ 75789 ] {} <Information> Application: Listening for connections with native protocol (tcp): [::1]:9000
2021.06.23 21:01:58.071430 [ 75789 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL Exception: Configuration error: no certificate file has been specified (version 21.7.1.1)
2021.06.23 21:01:58.071458 [ 75789 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 139, e.displayText() = DB::Exception: Certificate file is not set. (version 21.7.1.1)
2021.06.23 21:01:58.071464 [ 75789 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2021.06.23 21:01:58.156967 [ 75789 ] {} <Information> Application: Listening for MySQL compatibility protocol: [::1]:9004
2021.06.23 21:01:58.157066 [ 75789 ] {} <Information> Application: Listening for http://127.0.0.1:8123
2021.06.23 21:01:58.157106 [ 75789 ] {} <Information> Application: Listening for connections with native protocol (tcp): 127.0.0.1:9000
2021.06.23 21:01:58.157160 [ 75789 ] {} <Trace> MySQLHandlerFactory: Failed to create SSL context. SSL will be disabled. Error: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = SSL Exception: Configuration error: no certificate file has been specified (version 21.7.1.1)
2021.06.23 21:01:58.157177 [ 75789 ] {} <Trace> MySQLHandlerFactory: Failed to read RSA key pair from server certificate. Error: Code: 139, e.displayText() = DB::Exception: Certificate file is not set. (version 21.7.1.1)
2021.06.23 21:01:58.157182 [ 75789 ] {} <Trace> MySQLHandlerFactory: Generating new RSA key pair.
2021.06.23 21:01:58.190719 [ 75789 ] {} <Information> Application: Listening for MySQL compatibility protocol: 127.0.0.1:9004
2021.06.23 21:01:58.192686 [ 75789 ] {} <Information> DNSCacheUpdater: Update period 15 seconds
2021.06.23 21:01:58.192822 [ 75789 ] {} <Information> Application: Available RAM: 16.00 GiB; physical cores: 6; logical cores: 12.
2021.06.23 21:01:58.192807 [ 75925 ] {} <Warning> ThreadStatus: Cannot set alternative signal stack for thread, errno: 12, strerror: Cannot allocate memory
2021.06.23 21:01:58.192833 [ 75926 ] {} <Warning> ThreadStatus: Cannot set alternative signal stack for thread, errno: 12, strerror: Cannot allocate memory
2021.06.23 21:01:58.192840 [ 75924 ] {} <Warning> ThreadStatus: Cannot set alternative signal stack for thread, errno: 12, strerror: Cannot allocate memory
2021.06.23 21:01:58.192862 [ 75927 ] {} <Warning> ThreadStatus: Cannot set alternative signal stack for thread, errno: 12, strerror: Cannot allocate memory
2021.06.23 21:01:58.192847 [ 75811 ] {} <Debug> DNSResolver: Updating DNS cache
2021.06.23 21:01:58.192936 [ 75811 ] {} <Debug> DNSResolver: Updated DNS cache
2021.06.23 21:01:58.193031 [ 75928 ] {} <Warning> ThreadStatus: Cannot set alternative signal stack for thread, errno: 12, strerror: Cannot allocate memory
2021.06.23 21:01:58.193104 [ 75789 ] {} <Information> Application: Ready for connections.
2021.06.23 21:02:02.828409 [ 75792 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: [::1]:60203
2021.06.23 21:02:02.828494 [ 75792 ] {} <Warning> ThreadStatus: Cannot set alternative signal stack for thread, errno: 12, strerror: Cannot allocate memory
2021.06.23 21:02:02.828628 [ 75792 ] {} <Debug> TCPHandler: Connected ClickHouse client version 21.7.0, revision: 54449, user: default.
2021.06.23 21:02:02.829333 [ 75792 ] {} <Trace> ContextAccess (default): Settings: readonly=0, allow_ddl=true, allow_introspection_functions=false
2021.06.23 21:02:02.829640 [ 75792 ] {} <Trace> ContextAccess (default): List of all grants: GRANT ALL ON *.* WITH GRANT OPTION
2021.06.23 21:02:02.829651 [ 75792 ] {} <Trace> ContextAccess (default): List of all grants including implicit: GRANT ALL ON *.* WITH GRANT OPTION
2021.06.23 21:02:02.830153 [ 75793 ] {} <Trace> TCPHandlerFactory: TCP Request. Address: [::1]:60204
2021.06.23 21:02:02.830225 [ 75793 ] {} <Warning> ThreadStatus: Cannot set alternative signal stack for thread, errno: 12, strerror: Cannot allocate memory
2021.06.23 21:02:02.830275 [ 75793 ] {} <Debug> TCPHandler: Connected ClickHouse client version 21.7.0, revision: 54449, user: default.
2021.06.23 21:02:02.832660 [ 75793 ] {213681aa-79f1-4869-aeae-b2b07d23ee30} <Debug> executeQuery: (from [::1]:60204, using production parser) SELECT DISTINCT arrayJoin(extractAll(name, '[\\w_]{2,}')) AS res FROM (SELECT name FROM system.functions UNION ALL SELECT name FROM system.table_engines UNION ALL SELECT name FROM system.formats UNION ALL SELECT name FROM system.table_functions UNION ALL SELECT name FROM system.data_type_families UNION ALL SELECT name FROM system.merge_tree_settings UNION ALL SELECT name FROM system.settings UNION ALL SELECT cluster FROM system.clusters UNION ALL SELECT macro FROM system.macros UNION ALL SELECT policy_name FROM system.storage_policies UNION ALL SELECT concat(func.name, comb.name) FROM system.functions AS func CROSS JOIN system.aggregate_function_combinators AS comb WHERE is_aggregate UNION ALL SELECT name FROM system.databases LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.tables LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.dictionaries LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.columns LIMIT 10000) WHERE notEmpty(res)
2021.06.23 21:02:02.834377 [ 75793 ] {213681aa-79f1-4869-aeae-b2b07d23ee30} <Error> executeQuery: Code: 306, e.displayText() = DB::Exception: Stack size too large. Stack address: 0x700001a58000, frame address: 0x700001a53f40, stack size: 540864, maximum stack size: 524288 (version 21.7.1.1) (from [::1]:60204) (in query: SELECT DISTINCT arrayJoin(extractAll(name, '[\\w_]{2,}')) AS res FROM (SELECT name FROM system.functions UNION ALL SELECT name FROM system.table_engines UNION ALL SELECT name FROM system.formats UNION ALL SELECT name FROM system.table_functions UNION ALL SELECT name FROM system.data_type_families UNION ALL SELECT name FROM system.merge_tree_settings UNION ALL SELECT name FROM system.settings UNION ALL SELECT cluster FROM system.clusters UNION ALL SELECT macro FROM system.macros UNION ALL SELECT policy_name FROM system.storage_policies UNION ALL SELECT concat(func.name, comb.name) FROM system.functions AS func CROSS JOIN system.aggregate_function_combinators AS comb WHERE is_aggregate UNION ALL SELECT name FROM system.databases LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.tables LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.dictionaries LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.columns LIMIT 10000) WHERE notEmpty(res)), Stack trace (when copying this message, always include the lines below):
<Empty trace>
2021.06.23 21:02:02.834456 [ 75793 ] {213681aa-79f1-4869-aeae-b2b07d23ee30} <Error> TCPHandler: Code: 306, e.displayText() = DB::Exception: Stack size too large. Stack address: 0x700001a58000, frame address: 0x700001a53f40, stack size: 540864, maximum stack size: 524288, Stack trace:
<Empty trace>
2021.06.23 21:02:02.834494 [ 75793 ] {213681aa-79f1-4869-aeae-b2b07d23ee30} <Debug> MemoryTracker: Peak memory usage (for query): 0.00 B.
2021.06.23 21:02:02.834504 [ 75793 ] {} <Debug> TCPHandler: Processed in 0.003573 sec.
2021.06.23 21:02:02.835023 [ 75793 ] {} <Debug> TCPHandler: Done processing connection.
2021.06.23 21:02:13.193784 [ 75811 ] {} <Debug> DNSResolver: Updating DNS cache
2021.06.23 21:02:13.193856 [ 75811 ] {} <Debug> DNSResolver: Updated DNS cache
^C2021.06.23 21:02:15.645344 [ 75790 ] {} <Trace> BaseDaemon: Received signal 2
2021.06.23 21:02:15.645422 [ 75790 ] {} <Information> Application: Received termination signal (Interrupt: 2)
2021.06.23 21:02:15.645446 [ 75789 ] {} <Debug> Application: Received termination signal.
2021.06.23 21:02:15.645463 [ 75789 ] {} <Debug> Application: Waiting for current connections to close.
2021.06.23 21:02:16.875950 [ 75789 ] {} <Information> Application: Closed all listening sockets. Waiting for 1 outstanding connections.
^C2021.06.23 21:02:17.870402 [ 75790 ] {} <Trace> BaseDaemon: Received signal 2
2021.06.23 21:02:17.870432 [ 75790 ] {} <Information> Application: Received termination signal (Interrupt: 2)
2021.06.23 21:02:17.870461 [ 75790 ] {} <Information> Application: Received second signal Interrupt. Immediately terminate.
```
**Execute clickhouse-client, the error is as follows:**
```
ClickHouse client version 21.7.1.1.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.7.1 revision 54449.
qihoodemacbook-pro.local :) Cannot load data for command line suggestions: Code: 306, e.displayText() = DB::Exception: Received from localhost:9000. DB::Exception: Stack size too large. Stack address: 0x700001a58000, frame address: 0x700001a53f40, stack size: 540864, maximum stack size: 524288. (version 21.7.1.1)
```
How should I solve this problem?
| https://github.com/ClickHouse/ClickHouse/issues/25632 | https://github.com/ClickHouse/ClickHouse/pull/25654 | 4140dbf870132c3fc574b7d7e4146eed68cda331 | 89007a8b6a2cebe1dd3170b4527afc7ae29f09f3 | "2021-06-23T13:10:07Z" | c++ | "2021-06-24T16:22:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,611 | ["tests/queries/0_stateless/01926_union_all_schmak.reference", "tests/queries/0_stateless/01926_union_all_schmak.sql"] | UNION with JOIN + WHERE query fails in 21.5+ | **Describe the bug**
The following query fails in 21.5 and 21.6 releases.
It worked fine in 21.4.
**Does it reproduce on recent release?**
Yes
**How to reproduce**
```
SELECT * FROM (
SELECT 1 AS a, 2 AS b FROM system.one
JOIN system.one USING dummy
UNION ALL
SELECT 3 AS a, 4 AS b FROM system.one
)
WHERE a != 10
```
**Expected behavior**
The query should run without errors and return both rows, (1, 2) and (3, 4), since neither row has a = 10.
**Error message and/or stacktrace**
```
Code: 49, e.displayText() = DB::Exception: Block structure mismatch in Pipe::unitePipes stream: different number of columns:
a UInt8 UInt8(size = 0), b UInt8 UInt8(size = 0), dummy UInt8 UInt8(size = 0)
a UInt8 UInt8(size = 0), b UInt8 UInt8(size = 0) (version 21.6.5.37 (official build))
``` | https://github.com/ClickHouse/ClickHouse/issues/25611 | https://github.com/ClickHouse/ClickHouse/pull/25831 | bba184a432ed08659795d78ce68ebaf3c21593ed | 3ebe43583f098c556ef28a44ac4150afab4c9f13 | "2021-06-23T06:33:29Z" | c++ | "2021-06-30T08:04:17Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,594 | ["src/DataTypes/NestedUtils.cpp", "tests/queries/0_stateless/01937_nested_chinese.reference", "tests/queries/0_stateless/01937_nested_chinese.sql"] | 'No columns in nested table' error in Nested type with Chinese field names | **Describe the bug**
When Chinese is used as a field name in a Nested type and you run a query such as:
````
SELECT * from table array join `xxxx`;
````
it raises the error:
`No columns in nested table xxx`
**Does it reproduce on recent release?**
ClickHouse server version 21.3.5 revision 54447.
**How to reproduce**
1. Create the table:
````
CREATE TABLE tmp.test1 (`id` String, `products` Nested (`产品` Array(String), `销量` Array(Int32))) ENGINE = ReplacingMergeTree ORDER BY id
````
2. Query:
````
SELECT * FROM tmp.test1 array join products;
````
Error:
````
Received exception from server (version 21.3.5):
Code: 208. DB::Exception: Received from localhost:9000. DB::Exception: No columns in nested table products.
````
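For contrast, a minimal sketch with ASCII field names (the table name `test_ascii` and the English column names are hypothetical), which does not hit this error, matching the observation below:

````sql
CREATE TABLE tmp.test_ascii (`id` String, `products` Nested (`name` Array(String), `sales` Array(Int32))) ENGINE = ReplacingMergeTree ORDER BY id;

SELECT * FROM tmp.test_ascii array join products;
````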
**Additional Information**
I've tried using English characters as the field names, and it works. | https://github.com/ClickHouse/ClickHouse/issues/25594 | https://github.com/ClickHouse/ClickHouse/pull/25923 | 38d1ce310d9ff824fc38143ab362460b2b83ab7d | f238d98fd04a88e4fbf27452a4729824d9b6ccc7 | "2021-06-22T14:29:33Z" | c++ | "2021-07-03T11:51:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,578 | ["src/Interpreters/HashJoin.cpp", "src/Interpreters/HashJoin.h", "src/Interpreters/TableJoin.cpp", "src/Interpreters/TableJoin.h", "src/Interpreters/TreeRewriter.cpp", "src/Interpreters/join_common.cpp", "tests/queries/0_stateless/02000_join_on_const.reference", "tests/queries/0_stateless/02000_join_on_const.sql", "tests/queries/0_stateless/02001_join_on_const_bs_long.reference", "tests/queries/0_stateless/02001_join_on_const_bs_long.sql.j2"] | Support `LEFT JOIN ... ON 1 = 1` expression | **Usage**
Postgres, MySQL, and Oracle support the expression `LEFT JOIN ... ON 1 = 1`, but ClickHouse does not.
It may be possible to replace `LEFT JOIN ... ON 1 = 1` with `CROSS JOIN`, but I want to use the query in the same form for lots of databases.
Please support the expression `LEFT JOIN ... ON 1 = 1`.
**Detailed SQL**
```SQL
CREATE TABLE test
(
`name` Nullable(String),
`id` int
)
ENGINE = Log();
insert into test values ('a', 1) ('b', 2) (null, 3);
select * from test t1 left join (select * from test ) t2 on 1 = 1
-- DB::Exception: Not equi-join ON expression: 1 = 1. No columns in one of equality side.: While processing 1 = 1.
```
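For reference, a sketch of the `CROSS JOIN` rewrite mentioned above, which ClickHouse accepts today (equivalent to `LEFT JOIN ... ON 1 = 1` whenever the right-hand side is non-empty, as it is here):

```sql
select * from test t1 cross join (select * from test) t2;
```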
| https://github.com/ClickHouse/ClickHouse/issues/25578 | https://github.com/ClickHouse/ClickHouse/pull/25894 | 6f369319c8db770c3e7ed88c1342c18eb598927b | a9689c0f2780df167ab2fcd7652eaef24a333177 | "2021-06-22T06:28:54Z" | c++ | "2021-11-08T12:52:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,512 | ["contrib/libunwind"] | ClickHouse-21.3.12.2-lts builds and starts OK on aarch64, but its server process exits once some operations are executed: terminate called for uncaught exception. | ### Information
**ClickHouse Version**
ClickHouse-21.3.12.2-lts:
https://github.com/ClickHouse/ClickHouse/tree/v21.3.12.2-lts
**Operating system**
CentOS7.9,4.18.0-193.28.1.el7.aarch64 #1 SMP Wed Oct 21 16:25:35 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux
**Cmake version**
3.18.4
**Ninja version**
make 3.82 used instead of ninja
**Compiler name and version**
clang11
### Logs
**Full cmake and/or ninja output**
[https://raw.githubusercontent.com/huangmr0811/ck_build/main/build_records.log](url)
**Clickhouse-21.3.12.2-lts Server Run Logs**
[https://github.com/huangmr0811/ck_build/blob/9a8f6020896c0aaf24b2ac40612f53e8b2101512/clickhouse_server_start.log](url)
### Supplementary notes
I built and ran ClickHouse-21.3.12.2-lts on an aarch64 server under 3 kinds of Linux, with these results:
**1. Built in a Docker container (Ubuntu 20.04) running on KylinV10 (similar to CentOS 7.6); run on the KylinV10 system directly (not in Docker) or in an Ubuntu 20.04 container on KylinV10:**
The build and startup succeed, but `terminate called for uncaught exception` occurs after some operations. I haven't found a common pattern among these operations; running them through either DBeaver or clickhouse-client can terminate the ClickHouse daemon.
**2. Built in a Docker container (Ubuntu 20.04) running on KylinV10; run on an Ubuntu 20.04 system:**
The build and startup succeed, and the server does not terminate after common SQL.
**3. Built on CentOS 7.9 (not in Docker); run also on CentOS 7.9 (as this issue describes):**
The build and startup succeed, but terminate is called for an uncaught exception after some operations, with no common pattern among them.
### Another supplement
I have also built ClickHouse-20.3.19.4-lts with the same build environment and a similar build process, with this result:
**1. Built in a Docker container (Ubuntu 20.04) running on KylinV10 (similar to CentOS 7.6); run on the KylinV10 system (not in Docker):**
The build and startup succeed, and the server does not terminate after common SQL. | https://github.com/ClickHouse/ClickHouse/issues/25512 | https://github.com/ClickHouse/ClickHouse/pull/25854 | 956b1f588dbf5e406bf61e8a9b4e329d35af8b70 | 011ed015fa49c8e1a37f6f103c28def5e637a23f | "2021-06-20T05:04:42Z" | c++ | "2021-06-30T13:18:59Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,422 | ["src/Common/hex.cpp", "src/Common/hex.h", "src/Functions/FunctionsCoding.cpp", "src/Functions/FunctionsCoding.h", "tests/queries/0_stateless/01926_bin_unbin.reference", "tests/queries/0_stateless/01926_bin_unbin.sql"] | Add functions `bin` and `unbin` | **Use case**
Viewing binary data such as bitmasks, hashes, internal representation of IEEE-754 numbers, etc.
Mostly for debugging.
**Describe the solution you'd like**
Similar to `hex`/`unhex`.
`bin` - returns a string with `0` and `1`. Bits are in "human" order, most significant first. Most significant zero bits are not printed (except the case with single zero).
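A sketch of the intended usage, mirroring `hex`/`unhex` (hypothetical: these functions don't exist yet, so the outputs below are only what the description above implies):
```sql
SELECT bin(10);        -- expected '1010': most significant bit first, leading zero bits trimmed
SELECT bin(0);         -- expected '0': the single-zero case mentioned above
SELECT unbin('1010');  -- expected to invert bin, the way unhex inverts hex
```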
**Describe alternatives you've considered**
Right now I have to run an interactive `python` from the command line, but I would be happier using ClickHouse.
**Caveats**
The name `bin` may suggest that the function will "split data into bins" but it is completely unrelated. | https://github.com/ClickHouse/ClickHouse/issues/25422 | https://github.com/ClickHouse/ClickHouse/pull/25609 | 80eaf8530190eae9f43c84c0faa394517d5098af | b46ac3dfd107c5aff0fe55c6bde36fd79dd12615 | "2021-06-17T17:40:56Z" | c++ | "2021-07-07T06:36:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,368 | ["src/Interpreters/ActionsDAG.cpp", "tests/queries/0_stateless/01913_join_push_down_bug.reference", "tests/queries/0_stateless/01913_join_push_down_bug.sql"] | Incorrect results (ignoring WHERE) when using JOIN combined with NOT IN | I get incorrect results as if particular WHERE condition is ignored when there is JOIN and NOT IN condition (empty set) in a query. I've created as minimal example as I could.
```
# Test dataset
CREATE TABLE test
(
`t` UInt8,
`flag` UInt8,
`id` UInt8
)
ENGINE = MergeTree
PARTITION BY t
ORDER BY (t, id)
SETTINGS index_granularity = 8192;
INSERT INTO test VALUES (1,0,1),(1,0,2),(1,0,3),(1,0,4),(1,0,5),(1,0,6),(1,1,7),(0,0,7);
# Explore dataset
SELECT * FROM test
┌─t─┬─flag─┬─id─┐
│ 0 │ 0 │ 7 │
└───┴──────┴────┘
┌─t─┬─flag─┬─id─┐
│ 1 │ 0 │ 1 │
│ 1 │ 0 │ 2 │
│ 1 │ 0 │ 3 │
│ 1 │ 0 │ 4 │
│ 1 │ 0 │ 5 │
│ 1 │ 0 │ 6 │
│ 1 │ 1 │ 7 │
└───┴──────┴────┘
# Problematic query
SELECT id, flag FROM test t1
INNER JOIN (SELECT DISTINCT id FROM test) AS t2 ON t1.id = t2.id
WHERE flag = 0 and t = 1 AND id NOT IN (SELECT 1 WHERE 0)
┌─id─┬─flag─┐
│ 1 │ 0 │
│ 2 │ 0 │
│ 3 │ 0 │
│ 4 │ 0 │
│ 5 │ 0 │
│ 6 │ 0 │
│ 7 │ 1 │ <---- this row should NOT be in result set because of flag = 0 condition in WHERE
└────┴──────┘
# If I remove NOT IN part (which should not affect anything because it is an empty set) I got correct results
SELECT id, flag FROM test t1
INNER JOIN (SELECT DISTINCT id FROM test) AS t2 ON t1.id = t2.id
WHERE flag = 0 and t = 1
┌─id─┬─flag─┐
│ 1 │ 0 │
│ 2 │ 0 │
│ 3 │ 0 │
│ 4 │ 0 │
│ 5 │ 0 │
│ 6 │ 0 │
└────┴──────┘
```
The original query was much more complex, didn't use a self join, and the NOT IN wasn't an empty set; I've boiled it down to the simple example above while trying to figure out why this happens. It seems everything matters here: table structure, query structure, and the dataset.
Yes, it is reproducing in current release with default settings.
21.6.4.26 - reproduced (current release)
21.5.6.6 - reproduced
Does NOT reproduce in 20.10.3.30 | https://github.com/ClickHouse/ClickHouse/issues/25368 | https://github.com/ClickHouse/ClickHouse/pull/25370 | 1de01ae722ccf53be9a36ec2638e662311f2b476 | 669b8a8b963a751d81cee54bc481ce6580aa630f | "2021-06-16T16:25:36Z" | c++ | "2021-06-17T20:07:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,293 | ["src/AggregateFunctions/AggregateFunctionSumMap.h", "tests/queries/0_stateless/01280_min_map_max_map.reference", "tests/queries/0_stateless/01280_min_map_max_map.sql"] | AST fuzzer (debug) — Logical error: 'Cannot sum Tuples'. | https://clickhouse-test-reports.s3.yandex.net/24341/038e4c3ed1881b4dbf4e21cb8992b43cb12f5756/fuzzer_debug/report.html#fail1
Query to reproduce:
```sql
SELECT minMap([1, 1], [[1], [1]])
``` | https://github.com/ClickHouse/ClickHouse/issues/25293 | https://github.com/ClickHouse/ClickHouse/pull/25298 | 779f5eca9449e209fff928925fe5ad84a16f349b | 6b264618aaa0828facdcddf0dd2bb037eea2ba40 | "2021-06-15T12:48:01Z" | c++ | "2021-06-16T09:26:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,259 | ["src/AggregateFunctions/AggregateFunctionTopK.h", "src/Common/SpaceSaving.h", "tests/queries/0_stateless/01910_memory_tracking_topk.reference", "tests/queries/0_stateless/01910_memory_tracking_topk.sql"] | The query with topK leads to OOM | `SELECT toDateTime(topKWeighted(-9223372036854775807, '-0.02')(NULL), '104857.5') FROM remote('127.0\00\0{2,3}127.0\00\0{2,3}127.0\00\0{2,3}127.0\00\0{2,3}127.0\00\0{2,3}127.0\00\0{2,3}127.0\00\0{2,3}127.0\00\0{2,3}', system.one) ORDER BY length(topK(1048577)(tuple(NULL))) ASC, '0' ASC NULLS LAST` | https://github.com/ClickHouse/ClickHouse/issues/25259 | https://github.com/ClickHouse/ClickHouse/pull/25260 | 8a267775c27b8d7d69aa11969bdfc9439137cef6 | 2370c8b3a1c74cbaa042946b640d72d1d79102db | "2021-06-14T01:27:21Z" | c++ | "2021-06-14T21:41:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,187 | ["src/Functions/in.cpp", "tests/queries/0_stateless/01906_lc_in_bug.reference", "tests/queries/0_stateless/01906_lc_in_bug.sql"] | PREWHERE and LowCardinality IN set do not work together | ```
milovidov-desktop :) SELECT * FROM (SELECT 'Hello' AS x) PREWHERE toLowCardinality(x) IN ('Hello', 'World')
SELECT *
FROM
(
SELECT 'Hello' AS x
)
PREWHERE toLowCardinality(x) IN ('Hello', 'World')
Query id: 58d92e01-c67e-4bcc-8b0c-788bb5e1287e
0 rows in set. Elapsed: 0.037 sec.
Received exception from server (version 21.7.1):
Code: 182. DB::Exception: Received from localhost:9000. DB::Exception: Illegal PREWHERE.
milovidov-desktop :) SELECT * FROM (SELECT toLowCardinality('Hello') AS x) PREWHERE x IN ('Hello', 'World')
SELECT *
FROM
(
SELECT toLowCardinality('Hello') AS x
)
PREWHERE x IN ('Hello', 'World')
Query id: 694bbad9-d228-47b8-ad60-c037005c7ee2
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 21.7.1):
Code: 182. DB::Exception: Received from localhost:9000. DB::Exception: Illegal PREWHERE.
milovidov-desktop :) SELECT * FROM (SELECT toLowCardinality(materialize('Hello')) AS x) PREWHERE x IN ('Hello', 'World')
SELECT *
FROM
(
SELECT toLowCardinality(materialize('Hello')) AS x
)
PREWHERE x IN ('Hello', 'World')
Query id: fc271924-c17c-457c-8cf8-21d0515388b7
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 21.7.1):
Code: 182. DB::Exception: Received from localhost:9000. DB::Exception: Illegal PREWHERE.
``` | https://github.com/ClickHouse/ClickHouse/issues/25187 | https://github.com/ClickHouse/ClickHouse/pull/25290 | 3a596fb4e5519514eb1ee37433254656cd5f3beb | 16675a4f4ebcff43ce91e56be75d76a1c2fe1801 | "2021-06-10T23:26:49Z" | c++ | "2021-06-16T12:37:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,134 | ["src/Processors/Merges/Algorithms/SummingSortedAlgorithm.cpp", "tests/queries/0_stateless/01532_execute_merges_on_single_replica.sql", "tests/queries/0_stateless/01913_summing_mt_and_simple_agg_function_with_lc.reference", "tests/queries/0_stateless/01913_summing_mt_and_simple_agg_function_with_lc.sql"] | crash SimpleAggregateFunction + LowCardinality(String) | ```
CREATE TABLE smta
(
`k` Int64,
`a` AggregateFunction(max, Int64),
`city` SimpleAggregateFunction(max, LowCardinality(String))
)
ENGINE = SummingMergeTree
ORDER BY k;
insert into smta(k, city) values (1, 'x');
BaseDaemon: ########################################
BaseDaemon: (version 21.7.1.7020 (official build), build id: F609E0F0A9A1245F3A29AA02CB5A58D597BFA40D) (from thread 2
BaseDaemon: Address: 0x1 Access: read. Address not mapped to object.
BaseDaemon: Stack trace: 0x8d33350 0x10855b5b 0x1085a947 0x1040e6d8 0x1040f628 0x102fbcca 0xfa627b2 0xfa6c46b 0xfa6ca
BaseDaemon: 1. memcpy @ 0x8d33350 in /usr/bin/clickhouse
BaseDaemon: 2. DB::SummingSortedAlgorithm::SummingMergedData::addRowImpl(std::__1::vector<DB::IColumn const*, std::__
BaseDaemon: 3. DB::SummingSortedAlgorithm::merge() @ 0x1085a947 in /usr/bin/clickhouse
BaseDaemon: 4. DB::MergeTreeDataWriter::mergeBlock(DB::Block const&, std::__1::vector<DB::SortColumnDescription, std:
ar> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >&
BaseDaemon: 5. DB::MergeTreeDataWriter::writeTempPart(DB::BlockWithPartition&, std::__1::shared_ptr<DB::StorageInMemo
BaseDaemon: 6. DB::MergeTreeBlockOutputStream::write(DB::Block const&) @ 0x102fbcca in /usr/bin/clickhouse
BaseDaemon: 7. DB::PushingToViewsBlockOutputStream::write(DB::Block const&) @ 0xfa627b2 in /usr/bin/clickhouse
BaseDaemon: 8. DB::AddingDefaultBlockOutputStream::write(DB::Block const&) @ 0xfa6c46b in /usr/bin/clickhouse
BaseDaemon: 9. DB::SquashingBlockOutputStream::finalize() @ 0xfa6caec in /usr/bin/clickhouse
BaseDaemon: 10. DB::SquashingBlockOutputStream::writeSuffix() @ 0xfa6cb89 in /usr/bin/clickhouse
BaseDaemon: 11. DB::TCPHandler::processInsertQuery(DB::Settings const&) @ 0x106470ef in /usr/bin/clickhouse
BaseDaemon: 12. DB::TCPHandler::runImpl() @ 0x1063f989 in /usr/bin/clickhouse
BaseDaemon: 13. DB::TCPHandler::run() @ 0x10652679 in /usr/bin/clickhouse
BaseDaemon: 14. Poco::Net::TCPServerConnection::start() @ 0x132f6e2f in /usr/bin/clickhouse
BaseDaemon: 15. Poco::Net::TCPServerDispatcher::run() @ 0x132f88ba in /usr/bin/clickhouse
BaseDaemon: 16. Poco::PooledThread::run() @ 0x1342c3f9 in /usr/bin/clickhouse
BaseDaemon: 17. Poco::ThreadImpl::runnableEntry(void*) @ 0x1342868a in /usr/bin/clickhouse
BaseDaemon: 18. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
BaseDaemon: 19. clone @ 0xf94cf in /lib/x86_64-linux-gnu/libc-2.28.so
BaseDaemon: Checksum of the binary: 368AF822681E77D6553F41BA8FB4AD91, integrity check passed.
``` | https://github.com/ClickHouse/ClickHouse/issues/25134 | https://github.com/ClickHouse/ClickHouse/pull/25300 | d8174cb9fd45f2ad1f5f4af1c0c490c195bdbac6 | 496aff211873cbb8ee5879a9474640298d8a6b31 | "2021-06-09T16:33:56Z" | c++ | "2021-06-17T17:18:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,129 | ["src/Processors/Formats/IInputFormat.h", "src/Processors/Formats/Impl/CSVRowInputFormat.cpp", "src/Processors/Formats/Impl/CSVRowInputFormat.h", "tests/queries/0_stateless/01903_csvwithnames_subset_of_columns.reference", "tests/queries/0_stateless/01903_csvwithnames_subset_of_columns.sh"] | Input format `CSVWithNames` not working properly without all columns in data | ClickHouse version: 21.4.7.3
Ubuntu: 18.04.4
**Describe the bug**
Data insertion using `CSVWithNames` does not work properly if `Nullable` columns aren't present in the input data.
It seems ClickHouse adds the full column list to the insert query itself when inserting data from stdin via `INSERT INTO test FORMAT CSVWithNames` with no explicit column list. ClickHouse inserts part of the data before the error is raised, but explicitly specifying the column list from the input data makes it work.
**Error**
```
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x87e86ca in /usr/bin/clickhouse
1. DB::Chunk::checkNumRowsIsConsistent() @ 0xff10545 in /usr/bin/clickhouse
2. DB::Chunk::Chunk(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >, unsigned long) @ 0xff109f6 in /usr/bin/clickhouse
3. DB::IRowInputFormat::generate() @ 0xff89925 in /usr/bin/clickhouse
4. DB::ISource::tryGenerate() @ 0xff18685 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0xff1827a in /usr/bin/clickhouse
6. DB::ParallelParsingInputFormat::InternalParser::getChunk() @ 0xffdf07e in /usr/bin/clickhouse
7. DB::ParallelParsingInputFormat::parserThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xffde5ee in /usr/bin/clickhouse
8. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x882b2f8 in /usr/bin/clickhouse
9. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() @ 0x882d2bf in /usr/bin/clickhouse
10. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x882883f in /usr/bin/clickhouse
11. ? @ 0x882c363 in /usr/bin/clickhouse
12. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
13. clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 21.4.7.3 (official build))
Code: 49. DB::Exception: Invalid number of rows in Chunk column Nullable(UInt8) position 1: expected 193403, got 0: data for INSERT was parsed from stdin
```
**Logs**
No new logs appear in `clickhouse-server.err.log`.
But `clickhouse-server.log` shows that ClickHouse specifies the full column list itself:
```
<Trace> ContextAccess (default): Access granted: INSERT(col0, col1) ON db.test;
```
**How to reproduce**
* ClickHouse: 21.4.7.3
* Client: clickhouse-client
* Default settings
```
CREATE TABLE db.test (
col0 Date,
col1 Nullable(UInt8)
)
ENGINE MergeTree()
PARTITION BY toYYYYMM(col0)
ORDER BY col0;
```
Command to insert 1.000.000 rows into `test` table only with `col0` column:
```
(echo col0; for _ in `seq 1 1000000`; do echo '2021-05-05'; done) | clickhouse-client -q "INSERT INTO test FORMAT CSVWithNames"
```
Commands to insert 1.000.000 rows into `test` that works:
All columns from table are presented in input data
```
(echo col0,col1; for _ in `seq 1 1000000`; do echo '2021-05-05',1; done) | clickhouse-client -q "INSERT INTO test FORMAT CSVWithNames"
```
Only one column presented in input data but the column list is strictly specified in the insert query.
```
(echo col0; for _ in `seq 1 1000000`; do echo '2021-05-05'; done) | clickhouse-client -q "INSERT INTO test (col0) FORMAT CSVWithNames"
```
**Expected behavior**
`*WithNames` formats should work without specifying the actual column list from input data even if input data doesn't have all table columns.
| https://github.com/ClickHouse/ClickHouse/issues/25129 | https://github.com/ClickHouse/ClickHouse/pull/25169 | 54f9db670ffbbae82d482b32998d216dc81548cb | e99662c68ef8799be6f09e0ca64407fe6356acbf | "2021-06-09T14:01:46Z" | c++ | "2021-06-11T07:42:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,070 | ["src/Common/FieldVisitors.h", "src/Storages/MergeTree/MergeTreePartition.cpp", "tests/queries/0_stateless/01891_partition_by_uuid.reference", "tests/queries/0_stateless/01891_partition_by_uuid.sql", "tests/queries/0_stateless/01891_partition_hash.reference", "tests/queries/0_stateless/01891_partition_hash.sql", "tests/queries/0_stateless/01891_partition_hash_no_long_int.reference", "tests/queries/0_stateless/01891_partition_hash_no_long_int.sql"] | Data loss after upgrading | After upgrading clickhouse client and server from version 21.5.6.6 to version 21.6.3.14, I've lost all data stored prior to the update (or at least, access to this data).
All my table definitions are intact, but data from before the upgrade is not available.
The clickhouse-server.log file has a lot of errors of the type "While loading part <path> calculated partition ID: <some-id> differs from partition ID in part name: <other-id>". For example:
```
2021.06.06 04:35:58.031714 [ 2076 ] {} <Error> auto DB::MergeTreeData::loadDataParts(bool)::(anonymous class)::operator()() const: Code: 246, e.displayText() = DB::Exception: While loading part /datadisk/clickhouse/store/1af/1afedfe1-c1e7-4382-9a39-6b81acd36f2c/7602567da462c2a7f2677c06a783bf1c_234_234_0/: calculated partition ID: 61df88718388867b629412876782fda3 differs from partition ID in part name: 7602567da462c2a7f2677c06a783bf1c, Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8b5b17a in /usr/bin/clickhouse
1. DB::IMergeTreeDataPart::loadPartitionAndMinMaxIndex() @ 0x100992a0 in /usr/bin/clickhouse
2. ? @ 0x1011ef1c in /usr/bin/clickhouse
3. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8b9df58 in /usr/bin/clickhouse
4. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()&&...)::'lambda'()::operator()() @ 0x8b9f91f in /usr/bin/clickhouse
5. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8b9b49f in /usr/bin/clickhouse
6. ? @ 0x8b9e9c3 in /usr/bin/clickhouse
7. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
8. __clone @ 0x12171f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 21.6.3.14 (official build))
```
I've noticed I have some data stored in a /detached/ folder. When I try to attach that with the "alter table X attach partition id 'Y'" syntax, I get the same error that the calculated partition ID differs from partition ID in part name.
What can have caused this? Is there anyway to get the data back?
Additional information:
- my tables are using the MergeTree engine, while some materialized views use AggregatingMergeTree and SummingMergeTree. All of them have lost older data.
- In the system tables, some tables have data from before the upgrade, while others seem to have been truncated when upgrading (e.g. system.trace_log includes old data, while system.errors only has data from after the upgrade). I don't know if this is by design or not.
- my best guess is that the upgrade caused this, but for all I know it is possible that some corruption happened before this, and the upgrade made it visible
- the partitioning key for all my tables is of type (UUID, UInt32) (e.g PARTITION BY (Id, toYYYYMM(timestamp)) )
| https://github.com/ClickHouse/ClickHouse/issues/25070 | https://github.com/ClickHouse/ClickHouse/pull/25127 | 9656c67853e3bd4fa710e1139df22ec1b00c6788 | 0636f4034915e6201d8bd017519a30eee59d4d33 | "2021-06-08T08:06:49Z" | c++ | "2021-06-09T23:31:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,044 | ["contrib/libunwind"] | arm64 ClickHouse start error | I user the docker image : altinity/clickhouse-server tag:21.6.1.6734-testing-arm imageID is:4613c0e98b40
and the container startup command is:
```
docker run -it --name clickhouse --ulimit nofile=262144:262144 -p 8123:8123 -p 9000:9000 -p 9009:9009 altinity/clickhouse-server:21.6.1.6734-testing-arm
the log file had fatal log ,Full log is:
2021.06.07 11:32:29.408531 [ 1 ] {} <Information> SentryWriter: Sending crash reports is disabled
2021.06.07 11:32:29.419995 [ 1 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2021.06.07 11:32:29.492856 [ 1 ] {} <Information> : Starting ClickHouse 21.6.1.6734 with revision 54451, build id: 2DF0BB0E45DEF2FAEE1C279774309D1CA1C1BC1C, PID 1
2021.06.07 11:32:29.493019 [ 1 ] {} <Information> Application: starting up
2021.06.07 11:32:29.587308 [ 1 ] {} <Warning> Application: Calculated checksum of the binary: 1CFD6734176B7CA8D26B342372E4DDE1. There is no information about the reference checksum.
2021.06.07 11:32:29.587437 [ 1 ] {} <Information> Application: It looks like the process has no CAP_IPC_LOCK capability, binary mlock will be disabled. It could happen due to incorrect ClickHouse package installation. You could
resolve the problem manually with 'sudo setcap cap_ipc_lock=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2021.06.07 11:32:29.587548 [ 1 ] {} <Information> StatusFile: Status file /var/lib/clickhouse/status already exists - unclean restart. Contents:
PID: 1
Started at: 2021-06-07 11:29:54
Revision: 54451
2021.06.07 11:32:29.587668 [ 1 ] {} <Debug> Application: rlimit on number of file descriptors is 262144
2021.06.07 11:32:29.587691 [ 1 ] {} <Debug> Application: Initializing DateLUT.
2021.06.07 11:32:29.587706 [ 1 ] {} <Trace> Application: Initialized DateLUT with time zone 'UTC'.
2021.06.07 11:32:29.587735 [ 1 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2021.06.07 11:32:29.588253 [ 1 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'dbac9366f263' as replica host.
2021.06.07 11:32:29.588295 [ 1 ] {} <Debug> Application: Initiailizing interserver credentials.
2021.06.07 11:32:29.588466 [ 1 ] {} <Information> SensitiveDataMaskerConfigRead: 1 query masking rules loaded.
2021.06.07 11:32:29.589158 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/config.xml'
2021.06.07 11:32:29.592893 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/config.xml', performing update on configuration
2021.06.07 11:32:29.594490 [ 1 ] {} <Information> Application: Setting max_server_memory_usage was set to 56.10 GiB (62.33 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2021.06.07 11:32:29.596280 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/config.xml', performed update on configuration
2021.06.07 11:32:29.597066 [ 1 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2021.06.07 11:32:29.597773 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2021.06.07 11:32:29.598431 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2021.06.07 11:32:29.598841 [ 1 ] {} <Debug> Access(user directories): Added users.xml access storage 'users.xml', path: /etc/clickhouse-server/users.xml
2021.06.07 11:32:29.599034 [ 1 ] {} <Debug> Access(user directories): Added local directory access storage 'local directory', path: /var/lib/clickhouse/access/
2021.06.07 11:32:29.599453 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2021.06.07 11:32:29.601931 [ 1 ] {} <Information> DatabaseAtomic (system): Total 2 tables and 0 dictionaries.
2021.06.07 11:32:29.602686 [ 54 ] {} <Debug> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda): Loading data parts
2021.06.07 11:32:29.604297 [ 55 ] {} <Debug> system.metric_log (c93267fc-5e19-4068-b82d-07e8a7b66ae3): Loading data parts
2021.06.07 11:32:29.604441 [ 54 ] {} <Debug> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda): Loaded data parts (7 items)
2021.06.07 11:32:29.606016 [ 55 ] {} <Debug> system.metric_log (c93267fc-5e19-4068-b82d-07e8a7b66ae3): Loaded data parts (7 items)
2021.06.07 11:32:29.606170 [ 1 ] {} <Information> DatabaseAtomic (system): Starting up tables.
2021.06.07 11:32:29.606209 [ 54 ] {} <Trace> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda): Found 4 old parts to remove.
2021.06.07 11:32:29.606209 [ 55 ] {} <Trace> system.metric_log (c93267fc-5e19-4068-b82d-07e8a7b66ae3): Found 1 old parts to remove.
2021.06.07 11:32:29.606240 [ 54 ] {} <Debug> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda): Removing part from filesystem all_1_1_0
2021.06.07 11:32:29.606250 [ 55 ] {} <Debug> system.metric_log (c93267fc-5e19-4068-b82d-07e8a7b66ae3): Removing part from filesystem 202106_1_10_2
2021.06.07 11:32:29.606550 [ 54 ] {} <Debug> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda): Removing part from filesystem all_2_2_0
2021.06.07 11:32:29.606743 [ 55 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 16 threads
2021.06.07 11:32:29.606767 [ 54 ] {} <Debug> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda): Removing part from filesystem all_3_3_0
2021.06.07 11:32:29.607004 [ 54 ] {} <Debug> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda): Removing part from filesystem all_4_4_0
2021.06.07 11:32:29.629090 [ 1 ] {} <Information> DatabaseAtomic (default): Total 0 tables and 0 dictionaries.
2021.06.07 11:32:29.629124 [ 1 ] {} <Information> DatabaseAtomic (default): Starting up tables.
2021.06.07 11:32:29.629186 [ 1 ] {} <Information> DatabaseCatalog: Found 0 partially dropped tables. Will load them and retry removal.
2021.06.07 11:32:29.629210 [ 1 ] {} <Debug> Application: Loaded metadata.
2021.06.07 11:32:29.629227 [ 1 ] {} <Information> Application: Query Profiler is only tested on x86_64. It also known to not work under qemu-user.
2021.06.07 11:32:29.629269 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package i
nstallation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2021.06.07 11:32:29.629538 [ 49 ] {} <Trace> BaseDaemon: Received signal 6
2021.06.07 11:32:29.629769 [ 79 ] {} <Fatal> BaseDaemon: ########################################
2021.06.07 11:32:29.629884 [ 79 ] {} <Fatal> BaseDaemon: (version 21.6.1.6734 (official build), build id: 2DF0BB0E45DEF2FAEE1C279774309D1CA1C1BC1C) (from thread 1) (no query) Received signal Aborted (6)
2021.06.07 11:32:29.629935 [ 79 ] {} <Fatal> BaseDaemon:
2021.06.07 11:32:29.629982 [ 79 ] {} <Fatal> BaseDaemon: Stack trace: 0xfffcd2557138
2021.06.07 11:32:29.630057 [ 79 ] {} <Fatal> BaseDaemon: 0. gsignal @ 0x37138 in /usr/lib/aarch64-linux-gnu/libc-2.31.so
2021.06.07 11:32:29.718354 [ 79 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 1CFD6734176B7CA8D26B342372E4DDE1. There is no information about the reference checksum.
2021.06.07 11:32:29.718503 [ 79 ] {} <Information> SentryWriter: Not sending crash report
2021.06.07 11:32:30.609114 [ 75 ] {} <Trace> SystemLog (system.crash_log): Flushing system log, 1 entries to flush up to offset 1
2021.06.07 11:32:30.609263 [ 75 ] {} <Debug> SystemLog (system.crash_log): Will use existing table system.crash_log for CrashLog
2021.06.07 11:32:30.609694 [ 75 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 600.41 GiB.
2021.06.07 11:32:30.610678 [ 75 ] {} <Trace> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda): Renaming temporary part tmp_insert_all_1_1_0 to all_7_7_0.
2021.06.07 11:32:30.610808 [ 75 ] {} <Trace> SystemLog (system.crash_log): Flushed system log up to offset 1
2021.06.07 11:32:37.109182 [ 73 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush up to offset 8
2021.06.07 11:32:37.109817 [ 73 ] {} <Debug> SystemLog (system.metric_log): Will use existing table system.metric_log for MetricLog
2021.06.07 11:32:37.113417 [ 73 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 600.41 GiB.
2021.06.07 11:32:37.116674 [ 73 ] {} <Trace> system.metric_log (c93267fc-5e19-4068-b82d-07e8a7b66ae3): Renaming temporary part tmp_insert_202106_1_1_0 to 202106_16_16_0.
2021.06.07 11:32:37.117223 [ 73 ] {} <Trace> SystemLog (system.metric_log): Flushed system log up to offset 8
2021.06.07 11:32:44.617337 [ 73 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush up to offset 16
2021.06.07 11:32:44.621164 [ 73 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 600.41 GiB.
2021.06.07 11:32:44.624326 [ 73 ] {} <Trace> system.metric_log (c93267fc-5e19-4068-b82d-07e8a7b66ae3): Renaming temporary part tmp_insert_202106_2_2_0 to 202106_17_17_0.
2021.06.07 11:32:44.624866 [ 73 ] {} <Trace> SystemLog (system.metric_log): Flushed system log up to offset 16
2021.06.07 11:32:49.629587 [ 49 ] {} <Trace> BaseDaemon: Received signal 5
2021.06.07 11:32:49.629777 [ 81 ] {} <Fatal> BaseDaemon: ########################################
2021.06.07 11:32:49.629882 [ 81 ] {} <Fatal> BaseDaemon: (version 21.6.1.6734 (official build), build id: 2DF0BB0E45DEF2FAEE1C279774309D1CA1C1BC1C) (from thread 1) (no query) Received signal Trace/breakpoint trap (5)
2021.06.07 11:32:49.629935 [ 81 ] {} <Fatal> BaseDaemon:
2021.06.07 11:32:49.629968 [ 81 ] {} <Fatal> BaseDaemon: Stack trace: 0xfffcd2543e40
2021.06.07 11:32:49.630047 [ 81 ] {} <Fatal> BaseDaemon: 0. abort @ 0x23e40 in /usr/lib/aarch64-linux-gnu/libc-2.31.so
2021.06.07 11:32:49.718446 [ 81 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 1CFD6734176B7CA8D26B342372E4DDE1. There is no information about the reference checksum.
2021.06.07 11:32:49.718595 [ 81 ] {} <Information> SentryWriter: Not sending crash report
2021.06.07 11:32:50.612026 [ 75 ] {} <Trace> SystemLog (system.crash_log): Flushing system log, 1 entries to flush up to offset 2
2021.06.07 11:32:50.612453 [ 75 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 600.41 GiB.
2021.06.07 11:32:50.613461 [ 75 ] {} <Trace> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda): Renaming temporary part tmp_insert_all_2_2_0 to all_8_8_0.
2021.06.07 11:32:50.613598 [ 75 ] {} <Trace> SystemLog (system.crash_log): Flushed system log up to offset 2
2021.06.07 11:32:52.124977 [ 73 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 7 entries to flush up to offset 23
2021.06.07 11:32:52.128833 [ 73 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 600.41 GiB.
2021.06.07 11:32:52.131943 [ 73 ] {} <Trace> system.metric_log (c93267fc-5e19-4068-b82d-07e8a7b66ae3): Renaming temporary part tmp_insert_202106_3_3_0 to 202106_18_18_0.
2021.06.07 11:32:52.132476 [ 73 ] {} <Trace> SystemLog (system.metric_log): Flushed system log up to offset 23
2021.06.07 11:32:59.632597 [ 73 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush up to offset 31
2021.06.07 11:32:59.636721 [ 73 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 600.41 GiB.
2021.06.07 11:32:59.639941 [ 73 ] {} <Trace> system.metric_log (c93267fc-5e19-4068-b82d-07e8a7b66ae3): Renaming temporary part tmp_insert_202106_4_4_0 to 202106_19_19_0.
2021.06.07 11:32:59.640481 [ 73 ] {} <Trace> SystemLog (system.metric_log): Flushed system log up to offset 31
2021.06.07 11:33:01.881676 [ 60 ] {} <Debug> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda) (MergerMutator): Selected 5 parts from all_1_4_1 to all_8_8_0
2021.06.07 11:33:01.881753 [ 60 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 600.41 GiB.
2021.06.07 11:33:01.881854 [ 80 ] {} <Debug> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda) (MergerMutator): Merging 5 parts: from all_1_4_1 to all_8_8_0 into Compact
2021.06.07 11:33:01.881992 [ 80 ] {} <Debug> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda) (MergerMutator): Selected MergeAlgorithm: Horizontal
2021.06.07 11:33:01.882094 [ 80 ] {} <Debug> MergeTreeSequentialSource: Reading 2 marks from part all_1_4_1, total 4 rows starting from the beginning of the part
2021.06.07 11:33:01.882294 [ 80 ] {} <Debug> MergeTreeSequentialSource: Reading 2 marks from part all_5_5_0, total 1 rows starting from the beginning of the part
2021.06.07 11:33:01.882412 [ 80 ] {} <Debug> MergeTreeSequentialSource: Reading 2 marks from part all_6_6_0, total 1 rows starting from the beginning of the part
2021.06.07 11:33:01.882538 [ 80 ] {} <Debug> MergeTreeSequentialSource: Reading 2 marks from part all_7_7_0, total 1 rows starting from the beginning of the part
2021.06.07 11:33:01.882649 [ 80 ] {} <Debug> MergeTreeSequentialSource: Reading 2 marks from part all_8_8_0, total 1 rows starting from the beginning of the part
2021.06.07 11:33:01.883834 [ 80 ] {} <Debug> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda) (MergerMutator): Merge sorted 8 rows, containing 11 columns (11 merged, 0 gathered) in 0.001997499 sec., 4005.0082628326727 row
s/sec., 836.98 KiB/sec.
2021.06.07 11:33:01.884335 [ 80 ] {} <Trace> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda): Renaming temporary part tmp_merge_all_1_8_2 to all_1_8_2.
2021.06.07 11:33:01.884434 [ 80 ] {} <Trace> system.crash_log (06c15551-7860-45b6-b204-4b0f9c44ddda) (MergerMutator): Merged 5 parts: from all_1_4_1 to all_8_8_0
```
tks!@filimonov | https://github.com/ClickHouse/ClickHouse/issues/25044 | https://github.com/ClickHouse/ClickHouse/pull/25854 | 956b1f588dbf5e406bf61e8a9b4e329d35af8b70 | 011ed015fa49c8e1a37f6f103c28def5e637a23f | "2021-06-07T11:44:16Z" | c++ | "2021-06-30T13:18:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 25,026 | ["src/DataTypes/DataTypeMap.cpp", "tests/queries/0_stateless/01318_map_add_map_subtract_on_map_type.reference", "tests/queries/0_stateless/01550_type_map_formats.reference", "tests/queries/0_stateless/02002_parse_map_int_key.reference", "tests/queries/0_stateless/02002_parse_map_int_key.sql"] | Map does not support Integer key very well since 21.4 | Below test case worked before 21.4. It seems there's parsing issue with all Map columns using Integer as key.
```sql
set allow_experimental_map_type = 1;
CREATE TABLE IF NOT EXISTS test_maps(ma Map(Integer, Array(String)), mi Map(Integer, Integer), ms Map(String, String)) ENGINE = Memory;
-- Exception on client:
-- Code: 62. DB::Exception: Cannot parse expression of type Map(Int32,Array(String)) here: {1:['11','12'],2:['22','23']},{1:11,2:22},{'k1':'v1','k2':'v2'}): data for INSERT was parsed from query
insert into test_maps values ({1:['11','12'],2:['22','23']},{1:11,2:22},{'k1':'v1','k2':'v2'});
```
Just tried 21.6 and it's still the same:
```sql
:) create table if not exists test_map_with_int_key(m Map(UInt32, UInt32)) engine = Memory;
...
:) SELECT CAST(([1, 2], [11, 22]), 'Map(UInt32, UInt32)') AS m;
SELECT CAST(([1, 2], [11, 22]), 'Map(UInt32, UInt32)') AS m
Query id: dbe47185-bc24-488b-a85d-761616e3199d
┌─m───────────┐
│ {1:11,2:22} │
└─────────────┘
:) insert into test_map_with_int_key SELECT CAST(([1, 2], [11, 22]), 'Map(UInt32, UInt32)') as m
INSERT INTO test_map_with_int_key SELECT CAST(([1, 2], [11, 22]), 'Map(UInt32, UInt32)') AS m
Query id: 6877901f-1518-48f9-ba65-d54ed7326388
Ok.
:) insert into test_map_with_int_key values({1:11,2:22})
INSERT INTO test_map_with_in_key VALUES
Query id: 3f6cae42-064a-467f-baae-b7f323e910c0
0 rows in set. Elapsed: 0.004 sec.
Received exception from server (version 21.6.3):
Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.test_map_with_in_key doesn't exist.
```
P.S. I'll submit PRs for integration test this week so that we can spot this kind of issue early. | https://github.com/ClickHouse/ClickHouse/issues/25026 | https://github.com/ClickHouse/ClickHouse/pull/27146 | e4f85a5569cf4df5f54e57c2341f19e93d8a19ae | e9fe03df14ac768301fb9fbb668358035418977e | "2021-06-07T01:41:56Z" | c++ | "2021-08-15T10:37:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,988 | ["src/Interpreters/ExternalDictionariesLoader.cpp"] | 21.2+ slow dictGet for a hashed dictionary | After an upgrade from 20.9 to 21.5 the system's performance became horrible.
It looks related to dictGet calls.
We use dictionaries extensively, some queries call dictGet billions of times.
This query shows the timings for one type of queries during the upgrade day (max_execution_time is set to 900 seconds):
```
select ...
from cluster(segmented,system,query_log)
where event_date = '2021-06-02' and type>1
and normalizeQuery(query) like '?'
order by query_duration_ms desc
revision | query_duration_ms | read_rows
---------+-------------------+------------
54450 | 897083 | 3459117378
54450 | 897014 | 3364998164
54450 | 897012 | 3299700894
54450 | 896979 | 3152379204
54450 | 896979 | 2702215145
54450 | 896964 | 3177876106
54450 | 896962 | 3474400555
54450 | 896958 | 3110768107
54450 | 896958 | 3110768107
54450 | 896955 | 2431164845
54450 | 896951 | 3256483636
54450 | 896951 | 3095311086
54450 | 896943 | 2642380771
54450 | 896940 | 2806694280
54450 | 896920 | 2519595244
54450 | 896109 | 3415702637
54450 | 896098 | 4081416581
54450 | 896090 | 3568601556
54450 | 896061 | 4020248971
54450 | 896035 | 4126625535
54450 | 892479 | 1712068430
54450 | 892455 | 1952843231
54450 | 892431 | 1773988056
54450 | 892430 | 2192413020
54450 | 892347 | 2100158081
54450 | 892339 | 1756340381
54450 | 892326 | 1674227229
54450 | 892320 | 2391544140
54450 | 892315 | 2140854925
54450 | 892277 | 2528340153
54450 | 892277 | 2137937751
54450 | 892269 | 2282153280
54450 | 892239 | 1980261024
54450 | 892170 | 1609459314
54450 | 892170 | 1609459314
54450 | 850833 | 4674872143
54450 | 844513 | 4662846798
54450 | 835782 | 4678629857
54450 | 821421 | 4664510731
54450 | 811417 | 4664635983
54450 | 810258 | 4671813792
54450 | 808047 | 4666613441
54450 | 791205 | 4681643817
54450 | 785817 | 4674652841
54450 | 311279 | 1111580518
54439 | 20446 | 4608220540
54439 | 20161 | 4606741306
54439 | 19823 | 4601688152
54439 | 18877 | 4594803842
54439 | 18386 | 4623632801
54439 | 16791 | 4644514747
54439 | 16741 | 4645854329
54439 | 16380 | 4654062155
54439 | 16307 | 4644366032
54439 | 16051 | 4654656590
54439 | 16020 | 4653820788
54439 | 15978 | 4663026174
54439 | 15751 | 4672618090
54439 | 15740 | 4672743806
54439 | 15732 | 4657023203
54439 | 15301 | 4668494064
54439 | 14786 | 4689708177
54439 | 14633 | 4659128226
54439 | 14027 | 4717231446
54439 | 13920 | 4720637148
54439 | 13502 | 4706140498
54439 | 13403 | 4708245560
54439 | 13399 | 4709977568
54439 | 13298 | 4704405057
54439 | 13163 | 4716129000
54439 | 12804 | 4716505118
54439 | 12796 | 4725167340
```
The dictionary is wide and sparse:
```
length(attribute.names) | 131
type | Hashed
bytes_allocated | 223264064
element_count | 2239707
select count(), min(key), median(key), max(key) from dictionary
count() | min(key) | median(key) | max(key)
--------+----------+-------------+---------------------
17097 | 5019 | 230417 | 18446744073709551615
```
I noticed that the number if context switches per second was very high while 21.5 was running

I checked different versions, 21.1 is working fine, 21.2 is slow.
I was not able to reproduce this on synthetic data yet, but I'll continue to try. | https://github.com/ClickHouse/ClickHouse/issues/24988 | https://github.com/ClickHouse/ClickHouse/pull/25001 | f2eed22ebd4e865fce20c2566759d74e8458dfd5 | aaaf28d7efbd2870d06dc8f6008035aad513bf9e | "2021-06-04T22:15:45Z" | c++ | "2021-06-06T07:43:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,971 | ["src/Storages/StorageDistributed.cpp", "tests/queries/0_stateless/01155_rename_move_materialized_view.reference", "tests/queries/0_stateless/01155_rename_move_materialized_view.sql", "tests/queries/0_stateless/arcadia_skip_list.txt"] | Unable to move Distributed table from Ordinary database to Atomic | **Describe the bug**
Can't rename distributed table and move it from Ordinary to Atomic database
**How to reproduce**
```
CREATE DATABASE atomic_database ON CLUSTER cluster_name ENGINE = Atomic
```
```
CREATE DATABASE ordinary_database ON CLUSTER cluster_name ENGINE = Ordinary
```
```
CREATE TABLE ordinary_database.t ON CLUSTER cluster_name
...
ENGINE = ReplicatedMergeTree(...)
...
```
```
CREATE TABLE ordinary_database.distributed_table ON CLUSTER cluster_name AS ordinary_database.t
ENGINE = Distributed(cluster_name, ordinary_database, t)
```
```
RENAME TABLE ordinary_database.distributed_table TO atomic_database.distributed_table
Query id: 7bd2101e-f78a-4584-8374-6e44a0bbf23f
0 rows in set. Elapsed: 0.011 sec.
Received exception from server (version 21.5.6):
Code: 1000. DB::Exception: Received from clickhouse-server:9000. DB::Exception: File not found: /var/lib/clickhouse/data/ordinary_database/distributed_table.
```
But
```
ls -lh /var/lib/clickhouse/data/ordinary_database
total 0
drwxr-x--- 2 ... staff 64B Jun 4 18:19 distributed_table
drwxr-x--- 4 ... staff 128B Jun 4 18:18 t
```
**Does it reproduce on recent release?**
Checked on 21.5.6.6 version
**Expected behavior**
For example, renaming _Views_ works perfectly. I expect the same from the _Distributed_ table
| https://github.com/ClickHouse/ClickHouse/issues/24971 | https://github.com/ClickHouse/ClickHouse/pull/25667 | a04605fee37f75cf2c3f5be06be1003a70b00261 | a14724ca7b0101927c359581e6d4b05453dc0049 | "2021-06-04T15:28:22Z" | c++ | "2021-06-24T21:05:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,910 | ["src/Interpreters/QueryNormalizer.cpp", "src/Interpreters/QueryNormalizer.h", "src/Interpreters/TreeRewriter.cpp", "src/Interpreters/TreeRewriter.h", "src/Interpreters/tests/gtest_cycle_aliases.cpp", "src/Storages/ColumnsDescription.cpp", "tests/queries/0_stateless/01902_self_aliases_in_columns.reference", "tests/queries/0_stateless/01902_self_aliases_in_columns.sql"] | Self-referencing MATERIALIZED is not forbidden | ```
CREATE TABLE a
(
`number` UInt64,
`x` MATERIALIZED x
)
ENGINE = MergeTree
ORDER BY number;
-- that should fail!
insert into a values (1);
Received exception from server (version 21.7.1):
Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Missing columns: 'x' while processing query: 'CAST(x, 'UInt8') AS x', required columns: 'x' 'x'.
``` | https://github.com/ClickHouse/ClickHouse/issues/24910 | https://github.com/ClickHouse/ClickHouse/pull/25059 | 8e88e682c194f6ee44abfff15e34b884ca1e9a2c | 185fb83587360e5fc2053b71af41aeda1ddd6ad9 | "2021-06-03T10:58:03Z" | c++ | "2021-06-08T20:36:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,901 | ["contrib/libunwind"] | ARM Oracle Cloud sigabort while start. | https://blogs.oracle.com/cloud-infrastructure/arm-based-cloud-computing-is-the-next-big-thing-introducing-arm-on-oci
**Does it reproduce on recent release?**
Yes, master
AArch64 from here https://clickhouse.tech/docs/en/getting-started/install/
**Enable crash reporting**
Done, but it crashing too early probably.
**Expected behavior**
Clickhouse started normally.
**Error message and/or stacktrace**
```
2021.06.02 23:35:03.080905 [ 36622 ] {} <Fatal> BaseDaemon: ########################################
2021.06.02 23:35:03.080930 [ 36622 ] {} <Fatal> BaseDaemon: (version 21.7.1.7031 (official build), build id: E1A16E06EFFA2C98D3DB5A7928262DD945FA4202) (from thread 36590) (no query) Received signal Aborted (6)
2021.06.02 23:35:03.081072 [ 36622 ] {} <Fatal> BaseDaemon:
2021.06.02 23:35:03.081138 [ 36622 ] {} <Fatal> BaseDaemon: Stack trace: 0xfffe430e2c6c
2021.06.02 23:35:03.081167 [ 36622 ] {} <Fatal> BaseDaemon: 0. __GI_raise @ 0x32c6c in /usr/lib64/libc-2.28.so
2021.06.02 23:35:03.136787 [ 36622 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 8966B74014E280E1F89E1B7D903ADD39. There is no information about the reference checksum.
2021.06.02 23:12:46.567141 [ 36026 ] {} <Fatal> BaseDaemon: ########################################
2021.06.02 23:12:46.567167 [ 36026 ] {} <Fatal> BaseDaemon: (version 21.7.1.7031 (official build), build id: E1A16E06EFFA2C98D3DB5A7928262DD945FA4202) (from thread 35995) (no query) Received signal Aborted (6)
2021.06.02 23:12:46.567176 [ 36026 ] {} <Fatal> BaseDaemon:
2021.06.02 23:12:46.567184 [ 36026 ] {} <Fatal> BaseDaemon: Stack trace: 0x727e9a8 0xc1cec10
2021.06.02 23:12:46.575052 [ 36026 ] {} <Fatal> BaseDaemon: 0.1. inlined from /build/build_docker/../src/Common/StackTrace.cpp:304: StackTrace::tryCapture()
2021.06.02 23:12:46.575074 [ 36026 ] {} <Fatal> BaseDaemon: 0. ../src/Common/StackTrace.cpp:270: StackTrace::StackTrace(ucontext_t const&) @ 0x727e9a8 in /usr/bin/clickhouse
2021.06.02 23:12:46.604001 [ 36026 ] {} <Fatal> BaseDaemon: 1.1. inlined from /build/build_docker/../src/Common/CurrentThread.h:78: DB::CurrentThread::getQueryId()
2021.06.02 23:12:46.604030 [ 36026 ] {} <Fatal> BaseDaemon: 1. ../base/daemon/BaseDaemon.cpp:139: signalHandler(int, siginfo_t*, void*) @ 0xc1cec10 in /usr/bin/clickhouse
2021.06.02 23:12:46.659585 [ 36026 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 8966B74014E280E1F89E1B7D903ADD39. There is no information about the reference checksum.
2021.06.02 23:26:18.716480 [ 36228 ] {} <Warning> Application: Calculated checksum of the binary: 8966B74014E280E1F89E1B7D903ADD39. There is no information about the reference checksum.
```
**Additional context**
| https://github.com/ClickHouse/ClickHouse/issues/24901 | https://github.com/ClickHouse/ClickHouse/pull/25854 | 956b1f588dbf5e406bf61e8a9b4e329d35af8b70 | 011ed015fa49c8e1a37f6f103c28def5e637a23f | "2021-06-03T08:53:06Z" | c++ | "2021-06-30T13:18:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,892 | ["src/Interpreters/ExpressionAnalyzer.cpp", "tests/queries/0_stateless/02366_window_function_order_by.reference", "tests/queries/0_stateless/02366_window_function_order_by.sql"] | Window functions: Not found column number in block error when specifying ORDER BY that uses a column not mentioned in the SELECT | **Describe the unexpected behaviour**
A "Not found column number in block" error occurs when the ORDER BY uses a column that is not mentioned in the SELECT.
**How to reproduce**
```
:) select count() over (order by number + 1) from numbers(10) order by number
Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Not found column number in block. There are only columns: count() OVER (ORDER BY number + 1 ASC), plus(number, 1)
```
**Expected behavior**
It should work.
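A workaround sketch in the meantime: carry the sort key through a subquery so the outer ORDER BY can still see it (my addition, not from the original report):
```
select c
from (select count() over (order by number + 1) as c, number from numbers(10))
order by number
```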
| https://github.com/ClickHouse/ClickHouse/issues/24892 | https://github.com/ClickHouse/ClickHouse/pull/39354 | 6a09340f1159f3e9a96dc4c01de4e7607b2173c6 | b01f5bdca8ed6e71b551234d94b861be299ceabc | "2021-06-02T22:08:08Z" | c++ | "2022-09-19T06:22:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,869 | ["src/Storages/MergeTree/ReplicatedMergeTreeQueue.cpp"] | The function ReplicatedMergeTreeQueue::removePartProducingOpsInRange removed a LogEntry by mistake | You have to provide the following information whenever possible.
**Describe the bug**
Case: two ClickHouse instances, A and B, are two replicas of the same shard. Quickly performing many "alter table replace partition" operations on the same table on A eventually leaves the data in the two instances inconsistent.
The code version is 20.8.12.2.
I debugged the code and found that the function `ReplicatedMergeTreeQueue::removePartProducingOpsInRange` deletes a LogEntry by mistake,
in this line of code: `checkReplaceRangeCanBeRemoved(part_info, *it, current)`.
For example, if variable "current" is "queue-0000000002", the content is:
```
format version: 4
create_time: 2021-06-02 15:01:51
source replica: replica01
block_id:
REPLACE_RANGE
drop_range_name: 68a6cbc22e808d5c3665a492584ec053_0_0_4294967295
from_database: erp_pa_dw_stream
from_table: dim_dept_info_d_whole_cdy1_4762f914c8ded772e6e742f43565e5d4_20210528_tmp_local
source_parts: []
new_parts: []
part_checksums: []
columns_version: -1
```
and variable "*it" is "queue-0000000003", with this content:
```
format version: 4
create_time: 2021-06-02 15:01:51
source replica: replica01
block_id:
REPLACE_RANGE
drop_range_name: cc04c226c1f9ac46fa63e73d7f6ec8f9_0_0_4294967295
from_database: erp_pa_dw_stream
from_table: dim_dept_info_d_whole_cdy1_358f28df0a09f4176940da62e12bdf63_20210528_tmp_local
source_parts: []
new_parts: []
part_checksums: []
columns_version: -1
```
Here `checkReplaceRangeCanBeRemoved` will return true, so queue-0000000003 will be removed and never executed, even though queue-0000000002 and queue-0000000003 drop ranges in different partitions.
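In SQL terms, the failing shape is roughly this (a sketch with made-up names): two quick REPLACE PARTITION commands targeting different partitions, where the replica replaying the queue drops the second entry while removing parts for the first:
```sql
ALTER TABLE dst REPLACE PARTITION 20210528 FROM tmp_a; -- becomes queue-0000000002
ALTER TABLE dst REPLACE PARTITION 20210529 FROM tmp_b; -- becomes queue-0000000003, wrongly removed
```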
**Does it reproduce on recent release?**
I'm not sure whether the latest release has this problem, but the logic of this code has not changed.
| https://github.com/ClickHouse/ClickHouse/issues/24869 | https://github.com/ClickHouse/ClickHouse/pull/25665 | d138ba79c7d5663c722143a4368127a5ece71c09 | f6afc381be331d36805fdcaaba7e4bda1fe25198 | "2021-06-02T07:59:33Z" | c++ | "2021-06-25T08:11:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,780 | ["src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp", "tests/queries/0_stateless/01710_force_use_projection.reference", "tests/queries/0_stateless/01710_force_use_projection.sql"] | false exception "No projection is used" when all parts are eliminated (pruned) | ```sql
DROP TABLE IF EXISTS tp;
create table tp (d1 Int32, d2 Int32, eventcnt Int64,
projection p (select sum(eventcnt), d1 group by d1))
engine = SummingMergeTree order by (d1, d2);
set allow_experimental_projection_optimization = 1, force_optimize_projection = 1;
select sum(eventcnt) eventcnt, d1
from tp
group by d1;
DB::Exception: No projection is used when allow_experimental_projection_optimization = 1.
```
The exception message is also incorrect; it should say "when force_optimize_projection = 1".
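For contrast, a sketch of the non-empty case (my addition): once a part exists, the same query should be able to use the projection, and the exception should not fire.
```sql
insert into tp values (1, 2, 3);
select sum(eventcnt) eventcnt, d1 from tp group by d1; -- expected to use projection p
```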
| https://github.com/ClickHouse/ClickHouse/issues/24780 | https://github.com/ClickHouse/ClickHouse/pull/24782 | 3dd7d252a955bcf6524cc3998195f7fec91ea2ad | d4cbce3761aa70528677d22d874d75755b7c53bd | "2021-05-31T01:37:26Z" | c++ | "2021-06-01T08:36:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,730 | ["src/Interpreters/MutationsInterpreter.cpp", "src/Interpreters/MutationsInterpreter.h", "src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp", "src/Storages/MergeTree/MergeTreeDataMergerMutator.h", "tests/queries/0_stateless/01923_ttl_with_modify_column.reference", "tests/queries/0_stateless/01923_ttl_with_modify_column.sql"] | Alter of a column participating in TTL expression can leave a table in an inconsistent state | version 21.5.5.12 (official build)
I'm trying to change data type for a column that is used in the TTL expression.
It works well when the table is empty.
When the table is not empty, the mutation fails.
As a result the table ends up in an inconsistent state.
```
create table test
(
InsertionDateTime DateTime,
TTLDays Int32 DEFAULT CAST(365, 'Int32')
)
Engine=MergeTree()
order by tuple()
TTL InsertionDateTime + toIntervalDay(TTLDays);
Ok.
insert into test values (now(), 23);
Ok.
ALTER TABLE test modify column TTLDays Int16 DEFAULT CAST(365, 'Int16');
Code: 341, e.displayText() = DB::Exception: Exception happened during execution of mutation 'mutation_2.txt' with part 'all_1_1_0' reason: 'Code: 10, e.displayText() = DB::Exception: Not found column InsertionDateTime in block. There are only columns: TTLDays (version 21.5.5.12 (official build))'. This error maybe retryable or not. In case of unretryable error, mutation can be killed with KILL MUTATION query (version 21.5.5.12 (official build)) [DB Errorcode=341]
insert into test values (now(), 23);
Ok.
select table, column, type, sum(rows) rows, sum(column_bytes_on_disk) on_disk
from system.parts_columns where column like 'TTLDays'
group by database, table, column, type
table | column | type | rows | on_disk
------+---------+-------+------+--------
test | TTLDays | Int32 | 1 | 86
test | TTLDays | Int16 | 1 | 84
``` | https://github.com/ClickHouse/ClickHouse/issues/24730 | https://github.com/ClickHouse/ClickHouse/pull/25554 | 7b4a56977dc0de8fef8dee7151a0ad0086b4ff60 | 1dda771864ec61691af7ee67fa6b8bf24360727a | "2021-05-28T16:05:24Z" | c++ | "2021-07-03T15:26:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,461 | ["src/AggregateFunctions/AggregateFunctionUniq.cpp", "src/AggregateFunctions/AggregateFunctionUniqCombined.cpp", "src/AggregateFunctions/AggregateFunctionUniqUpTo.cpp", "tests/queries/0_stateless/01882_uniqueState_over_uniqueState.reference", "tests/queries/0_stateless/01882_uniqueState_over_uniqueState.sh"] | Crash when chaining different uniq*State | **Describe the bug**
Chaining **different** uniqXXXXState calls (uniqState, uniqExactState, uniqThetaState, uniqHLL12State or uniqCombinedState) crashes the server.
**Does it reproduce on recent release?**
Yes. I've tested 20.7 and current master/HEAD.
**How to reproduce**
Example with a query:
```sql
SELECT
id,
uniqState(s)
FROM
(
SELECT
number % 10 as id,
uniqExactState(number) as s
FROM
(
SELECT number
FROM system.numbers
LIMIT 1000
)
GROUP BY number
)
GROUP BY id
```
The other way around (uniqState -> uniqExactState) also crashes, as well as other combinations with other uniq*State.
Same can be done using a table:
```sql
DROP TABLE IF EXISTS uniq_crash_table;
CREATE TABLE uniq_crash_table
(
`id` Int64,
`hits` AggregateFunction(uniq, UInt64)
)
ENGINE = AggregatingMergeTree
PARTITION BY id
ORDER BY id AS
SELECT
number % 10 AS id,
uniqState(number) AS hits
FROM
(
SELECT number
FROM system.numbers
LIMIT 1000
)
GROUP BY number
SELECT
id,
uniqExactState(hits)
FROM uniq_crash_table
GROUP BY id
```
**Expected behavior**
The server shouldn't crash. If the SQL is invalid then an exception should be raised instead.
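For what it's worth, a sketch of the supported way to re-aggregate stored states, using the `-Merge`/`-MergeState` combinator that matches the stored function, which avoids the mismatched-state path entirely:
```sql
SELECT
    id,
    uniqMerge(hits) -- or uniqMergeState(hits) to keep the result as a state
FROM uniq_crash_table
GROUP BY id
```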
**Error message and/or stacktrace**
Different queries crash in different parts, for example when running the first query from above you get an error like this in the clickhouse-client:
```
Exception on client:
Code: 173. DB::ErrnoException: Allocator: Cannot mmap 1.00 TiB., errno: 12, strerror: Cannot allocate memory: while receiving packet from localhost:9000
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.7.1 revision 54448.
```
And the coredump looks like this:
```
(gdb) bt
#0 0x00007f2685cacd22 in raise () from /usr/lib/libc.so.6
#1 0x00007f2685c96862 in abort () from /usr/lib/libc.so.6
#2 0x00007f2687701fd4 in terminate_handler () at ../base/daemon/BaseDaemon.cpp:432
#3 0x00007f2685e7d888 in std::__terminate (func=0x2) at ../contrib/libcxxabi/src/cxa_handlers.cpp:59
#4 0x00007f2685e7d7ea in std::terminate () at ../contrib/libcxxabi/src/cxa_handlers.cpp:88
#5 0x00007f2683ee213b in __clang_call_terminate () from /mnt/ch/ClickHouse/build/src/AggregateFunctions/libclickhouse_aggregate_functions.so
#6 0x00007f2685579690 in HashTable<unsigned long, HashTableCell<unsigned long, HashCRC32<unsigned long>, HashTableNoState>, HashCRC32<unsigned long>, HashTableGrower<4ul>, AllocatorWithStackMemory<Allocator<true, true>, 128ul, 1ul> >::~HashTable (
this=0x7f258483f0a0) at ../src/Common/HashTable/HashTable.h:703
#7 DB::AggregateFunctionUniqExactData<unsigned long>::~AggregateFunctionUniqExactData (this=0x7f258483f0a0) at ../src/AggregateFunctions/AggregateFunctionUniq.h:93
#8 DB::IAggregateFunctionDataHelper<DB::AggregateFunctionUniqExactData<unsigned long>, DB::AggregateFunctionUniq<unsigned long, DB::AggregateFunctionUniqExactData<unsigned long> > >::destroy (this=<optimized out>, place=0x7f258483f0a0 "")
at ../src/AggregateFunctions/IAggregateFunction.h:443
#9 0x00007f267960cdbc in DB::ColumnAggregateFunction::~ColumnAggregateFunction (this=0x7f2566fbc980) at ../src/Columns/ColumnAggregateFunction.cpp:80
#10 0x00007f267960cf8e in DB::ColumnAggregateFunction::~ColumnAggregateFunction (this=0x7f2566fbc980) at ../src/Columns/ColumnAggregateFunction.cpp:77
#11 0x00007f2687eb8c32 in boost::sp_adl_block::intrusive_ptr_release<DB::IColumn, boost::sp_adl_block::thread_safe_counter> (p=<optimized out>) at ../contrib/boost/boost/smart_ptr/intrusive_ref_counter.hpp:173
#12 boost::intrusive_ptr<DB::IColumn const>::~intrusive_ptr (this=0x7f25831a4770) at ../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:98
#13 DB::ColumnWithTypeAndName::~ColumnWithTypeAndName (this=0x7f25831a4770) at ../src/Core/ColumnWithTypeAndName.h:19
#14 std::__1::allocator<DB::ColumnWithTypeAndName>::destroy (this=0x7f258edf0a50, __p=0x7f25831a4770) at ../contrib/libcxx/include/memory:891
#15 std::__1::allocator_traits<std::__1::allocator<DB::ColumnWithTypeAndName> >::__destroy<DB::ColumnWithTypeAndName> (__a=..., __p=0x7f25831a4770) at ../contrib/libcxx/include/__memory/allocator_traits.h:539
#16 std::__1::allocator_traits<std::__1::allocator<DB::ColumnWithTypeAndName> >::destroy<DB::ColumnWithTypeAndName> (__a=..., __p=0x7f25831a4770) at ../contrib/libcxx/include/__memory/allocator_traits.h:487
#17 std::__1::__vector_base<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >::__destruct_at_end (this=0x7f258edf0a40, __new_last=0x7f25831a4740) at ../contrib/libcxx/include/vector:428
#18 std::__1::__vector_base<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >::clear (this=0x7f258edf0a40) at ../contrib/libcxx/include/vector:371
#19 std::__1::__vector_base<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >::~__vector_base (this=0x7f258edf0a40) at ../contrib/libcxx/include/vector:465
#20 std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >::~vector (this=0x7f258edf0a40) at ../contrib/libcxx/include/vector:557
#21 DB::Block::~Block (this=0x7f258edf0a40) at ../src/Core/Block.h:25
#22 0x00007f26783e29b9 in DB::TCPHandler::processOrdinaryQueryWithProcessors (this=<optimized out>, this@entry=0x7f2583167500) at ../src/Server/TCPHandler.cpp:722
#23 0x00007f26783dd487 in DB::TCPHandler::runImpl (this=0x7f2583167500) at ../src/Server/TCPHandler.cpp:331
#24 0x00007f26783e90a9 in DB::TCPHandler::run (this=0x7f2583167500) at ../src/Server/TCPHandler.cpp:1621
#25 0x00007f26869b110c in Poco::Net::TCPServerConnection::start (this=0x2) at ../contrib/poco/Net/src/TCPServerConnection.cpp:43
#26 0x00007f26869b1647 in Poco::Net::TCPServerDispatcher::run (this=0x7f2583d8d800) at ../contrib/poco/Net/src/TCPServerDispatcher.cpp:115
#27 0x00007f268668ddaa in Poco::PooledThread::run (this=0x7f266f769380) at ../contrib/poco/Foundation/src/ThreadPool.cpp:199
#28 0x00007f268668b660 in Poco::ThreadImpl::runnableEntry (pThread=0x7f266f7693b8) at ../contrib/poco/Foundation/src/Thread_POSIX.cpp:345
#29 0x00007f2685f9c259 in start_thread () from /usr/lib/libpthread.so.0
#30 0x00007f2685d6e5e3 in clone () from /usr/lib/libc.so.6
```
Any pointers on where to look to fix this myself are also appreciated. | https://github.com/ClickHouse/ClickHouse/issues/24461 | https://github.com/ClickHouse/ClickHouse/pull/24523 | f1733a93e5072e1e14db47584974a484304faa4d | af32228e9f3cedd4a1cabe97c6c8df23f4f05ba1 | "2021-05-24T18:02:55Z" | c++ | "2021-06-04T14:26:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,436 | ["src/Interpreters/ActionsDAG.cpp", "src/Processors/QueryPlan/UnionStep.cpp", "tests/queries/0_stateless/01881_union_header_mismatch_bug.reference", "tests/queries/0_stateless/01881_union_header_mismatch_bug.sql"] | UNION with constants + WHERE: query fails with 21.5.5.12 - worked with 21.4.7.3 |
**Describe the bug**
This query fails with the latest stable version 21.5.5.12. It worked with the previous stable version 21.4.7.3.
**Does it reproduce on recent release?**
yes
**How to reproduce**
21.5.5.12
```
SELECT
label,
number
FROM
(
SELECT
'a' AS label,
number
FROM
(
SELECT number
FROM numbers(10)
)
UNION ALL
SELECT
'b' AS label,
number
FROM
(
SELECT number
FROM numbers(10)
)
)
WHERE number IN
(
SELECT number
FROM numbers(5)
)
```
**Expected behavior**
The query should run without errors.
**Error message and/or stacktrace**
```
Received exception from server (version 21.5.5):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Block structure mismatch in Pipe::unitePipes stream: different names of columns:
label String String(size = 0), number UInt64 UInt64(size = 0), 'b' String Const(size = 0, String(size = 1))
label String String(size = 0), number UInt64 UInt64(size = 0), 'a' String Const(size = 0, String(size = 1)).
```
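A possible workaround sketch (my addition, untested on the affected build) is to push the `IN` filter into each `UNION ALL` branch:

```sql
SELECT 'a' AS label, number
FROM numbers(10)
WHERE number IN (SELECT number FROM numbers(5))
UNION ALL
SELECT 'b' AS label, number
FROM numbers(10)
WHERE number IN (SELECT number FROM numbers(5))
```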
| https://github.com/ClickHouse/ClickHouse/issues/24436 | https://github.com/ClickHouse/ClickHouse/pull/24463 | ea97eee326a94c9a23d1b4349f660a17ad29b551 | d4998909b666e7129ca14221e6546b457086e16f | "2021-05-24T05:37:33Z" | c++ | "2021-05-25T09:04:45Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,418 | ["docs/ru/sql-reference/dictionaries/index.md"] | Bad link to external dictionary functions (Russian edition) | **Describe the issue**
On the page https://clickhouse.tech/docs/ru/sql-reference/dictionaries/
the link to the dictionary functions is in the sentence "Подключаемые (внешние) словари с **набором функций**." ("Pluggable (external) dictionaries with a **set of functions**.").
The link under "набором функций" ("set of functions") currently points to https://clickhouse.tech/docs/ru/sql-reference/dictionaries/external-dictionaries/
but it should point to https://clickhouse.tech/docs/ru/sql-reference/functions/ext-dict-functions/
| https://github.com/ClickHouse/ClickHouse/issues/24418 | https://github.com/ClickHouse/ClickHouse/pull/24429 | e7f77f27bf3245bc1c0493d423a7c18eb81344e7 | d45edc1f5d1a3c55f91eca693331439ee16a0f12 | "2021-05-22T15:19:28Z" | c++ | "2021-05-23T14:38:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,373 | ["CMakeLists.txt", "cmake/git_status.cmake"] | If git is present during build, print sha and status. | It will make it easy to figure out whether the user is building unmodified ClickHouse source code. | https://github.com/ClickHouse/ClickHouse/issues/24373 | https://github.com/ClickHouse/ClickHouse/pull/28047 | 23a2ce20195b8a8758a77290a08dea3b7cb4d8ac | 19f67fefb24452acc0224913212bec6ad05a57f3 | "2021-05-21T02:26:21Z" | c++ | "2021-08-24T17:09:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,301 | ["src/Interpreters/ReplaceQueryParameterVisitor.cpp", "tests/queries/0_stateless/02377_extend_protocol_with_query_parameters.reference", "tests/queries/0_stateless/02377_extend_protocol_with_query_parameters.sh"] | Parameters do not work on DESCRIBE | **Describe the bug**
I am working with parameterized queries and they work fine for me so far for updates and selects.
The behaviour seems to break, though, when `DESCRIBE` is involved.
**Does it reproduce on recent release?**
Version used: 21.4.6.55 (server) -- yes
**How to reproduce**
* Which ClickHouse server version to use
* Version used: 21.4.6.55 (server)
* Which interface to use, if matters
* CLI and HTTP
* Non-default settings, if any
* no
When using the CLI on a simple `SELECT` example, the parameters work fine:
```sh
$ clickhouse client --param_p0="2" -q "SELECT {p0:String}"
2
```
When using it in a `DESCRIBE` part it does break though:
```sh
$ clickhouse client --param_p0="2" -q "DESCRIBE TABLE (SELECT {p0:String})"
Received exception from server (version 21.4.6):
Code: 456. DB::Exception: Received from localhost:9000. DB::Exception: Query parameter `p0` was not set.
```
**Expected behavior**
It would be expected to return the correct types, like:
```sh
$ clickhouse client -q "DESCRIBE TABLE (SELECT '2')"
\'2\' String
```
**Error message and/or stacktrace**
As written above the following message appears:
```
Received exception from server (version 21.4.6):
Code: 456. DB::Exception: Received from localhost:9000. DB::Exception: Query parameter `p0` was not set.
```
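As a side note (my addition, a hedged workaround): parameters do substitute in plain `SELECT` queries, so until `DESCRIBE` is fixed the type can be inspected with `toTypeName`, e.g.:

```sql
SELECT toTypeName({p0:String})  -- returns 'String' when --param_p0 is set
```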
**Additional context**
Possibly related to #10976
| https://github.com/ClickHouse/ClickHouse/issues/24301 | https://github.com/ClickHouse/ClickHouse/pull/40952 | 3da9e8646d118fff9e0c573cbb4b19bc08ff16fd | 16af4aebc815d5af439ff0da3b960d810401b790 | "2021-05-19T16:54:55Z" | c++ | "2022-09-04T14:26:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,293 | ["src/DataTypes/DataTypeTuple.cpp", "tests/queries/0_stateless/01881_create_as_tuple.reference", "tests/queries/0_stateless/01881_create_as_tuple.sql"] | If using Array of Tuple column in MergeTree table, the data cannot be read after restart | **Describe the bug**
I have a table created from a `SELECT` query. The table has two columns: a `UInt64` primary key and an array of tuples `Array(Tuple(String, UInt64))`. After a restart of the ClickHouse server the data in the second column is lost (a `SELECT` query returns only empty strings and zeros).
**Does it reproduce on recent release?**
Yes
**How to reproduce**
1. Create a table
```
CREATE TABLE Test ENGINE = MergeTree()
ORDER BY number AS
SELECT number,[('string',number)] as array from numbers(1,1000000)
```
2. Select the data
```
SELECT * FROM Test LIMIT 5
```
```
┌─number─┬─array──────────┐
│ 1 │ [('string',1)] │
│ 2 │ [('string',2)] │
│ 3 │ [('string',3)] │
│ 4 │ [('string',4)] │
│ 5 │ [('string',5)] │
└────────┴────────────────┘
```
(everything is fine)
3. Restart ClickHouse server (`sudo systemctl restart clickhouse-server`)
4. Select the data again
```
SELECT * FROM Test LIMIT 5
```
```
┌─number─┬─array────┐
│ 1 │ [('',0)] │
│ 2 │ [('',0)] │
│ 3 │ [('',0)] │
│ 4 │ [('',0)] │
│ 5 │ [('',0)] │
└────────┴──────────┘
```
The data in the `array` column is lost.
**Additional context**
In order to reproduce this behavior, the number of rows in the `Test` table should be sufficiently large (> 300000 rows on my machine).
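A hypothetical workaround sketch (my addition, untested): declare the column types explicitly instead of relying on `CREATE ... AS SELECT` type inference, in case the inference path is at fault:

```sql
CREATE TABLE Test2
(
    `number` UInt64,
    `array` Array(Tuple(String, UInt64))
)
ENGINE = MergeTree()
ORDER BY number AS
SELECT number, [('string', number)] AS array FROM numbers(1, 1000000)
```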
| https://github.com/ClickHouse/ClickHouse/issues/24293 | https://github.com/ClickHouse/ClickHouse/pull/24464 | 151988e11a891b3a739a483aaf944d1b5ea3462a | 1317638e3bd73dd0fab07c0f485e58b4a490741c | "2021-05-19T14:52:49Z" | c++ | "2021-05-27T22:46:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,292 | ["src/Interpreters/QueryNormalizer.cpp", "src/Interpreters/QueryNormalizer.h", "src/Interpreters/TreeRewriter.cpp", "src/Interpreters/TreeRewriter.h", "src/Interpreters/tests/gtest_cycle_aliases.cpp", "src/Storages/ColumnsDescription.cpp", "tests/queries/0_stateless/01902_self_aliases_in_columns.reference", "tests/queries/0_stateless/01902_self_aliases_in_columns.sql"] | Self referencing ALIAS definition triggers a segfault when queried | A table definition with an `ALIAS` that references itself is accepted without any warning or error. When querying the aliased column clickhouse-server segfaults.
It occurred for me on 21.4.6.55.
**How to reproduce**
* Use Version 21.4.6.55
* Run these statements:
```sql
CREATE TABLE foo (i Int32, j ALIAS j+1) ENGINE = MergeTree() ORDER BY i;
INSERT INTO foo VALUES (23);
SELECT * FROM foo
┌──i─┐
│ 23 │
└────┘
SELECT * FROM foo WHERE j = 42
[00fc17cc0543] 2021.05.19 16:41:42.120886 [ 168 ] <Fatal> BaseDaemon: ########################################
[00fc17cc0543] 2021.05.19 16:41:42.121084 [ 168 ] <Fatal> BaseDaemon: (version 21.4.6.55 (official build), build id: CFD133A618715680C43AA5458CC9782F35C3B1E7) (from thread 101) (query_id: 398eee86-f3d6-446c-8175-000db227cc49) Received signal Segmentation fault (11)
[00fc17cc0543] 2021.05.19 16:41:42.121193 [ 168 ] <Fatal> BaseDaemon: Address: 0x7f98acbffff8 Access: write. Attempted access has violated the permissions assigned to the memory area.
[00fc17cc0543] 2021.05.19 16:41:42.121322 [ 168 ] <Fatal> BaseDaemon: Stack trace: 0xf043c5b 0xf042ff7 0xf042c3f 0xf2b5c9f 0xf2b5a31 0xf71b253 0xf754dd2 0xf7542b8 0xf754313 0xf754313 0xf754313 0xf754313 0xf754eae 0xf7542b8 0xf754313 0xf754313 0xf754313 0xf754313 0xf754eae 0xf7542b8 0xf754313 0xf754313 0xf754313 0xf754313 0xf754eae 0xf7542b8 0xf754313 0xf754313 0xf754313 0xf754313 0xf754eae
[00fc17cc0543] 2021.05.19 16:41:42.121488 [ 168 ] <Fatal> BaseDaemon: 1. DB::ActionsDAG::addNode(DB::ActionsDAG::Node) @ 0xf043c5b in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.121594 [ 168 ] <Fatal> BaseDaemon: 2. DB::ActionsDAG::addInput(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::shared_ptr<DB::IDataType const>) @ 0xf042ff7 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.121666 [ 168 ] <Fatal> BaseDaemon: 3. DB::ActionsDAG::ActionsDAG(DB::NamesAndTypesList const&) @ 0xf042c3f in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.121770 [ 168 ] <Fatal> BaseDaemon: 4. DB::ExpressionAnalyzer::analyzeAggregation() @ 0xf2b5c9f in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.121894 [ 168 ] <Fatal> BaseDaemon: 5. DB::ExpressionAnalyzer::ExpressionAnalyzer(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::TreeRewriterResult const> const&, DB::Context const&, unsigned long, bool, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, DB::SubqueryForSet, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, DB::SubqueryForSet> > >) @ 0xf2b5a31 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.122024 [ 168 ] <Fatal> BaseDaemon: 6. DB::addTypeConversionToAST(std::__1::shared_ptr<DB::IAST>&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::NamesAndTypesList const&, DB::Context const&) @ 0xf71b253 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.122131 [ 168 ] <Fatal> BaseDaemon: 7. DB::ColumnAliasesMatcher::visit(DB::ASTIdentifier&, std::__1::shared_ptr<DB::IAST>&, DB::ColumnAliasesMatcher::Data&) @ 0xf754dd2 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.122212 [ 168 ] <Fatal> BaseDaemon: 8. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf7542b8 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.122309 [ 168 ] <Fatal> BaseDaemon: 9. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.122403 [ 168 ] <Fatal> BaseDaemon: 10. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.122503 [ 168 ] <Fatal> BaseDaemon: 11. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.122611 [ 168 ] <Fatal> BaseDaemon: 12. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.122682 [ 168 ] <Fatal> BaseDaemon: 13. DB::ColumnAliasesMatcher::visit(DB::ASTIdentifier&, std::__1::shared_ptr<DB::IAST>&, DB::ColumnAliasesMatcher::Data&) @ 0xf754eae in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.122778 [ 168 ] <Fatal> BaseDaemon: 14. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf7542b8 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.122877 [ 168 ] <Fatal> BaseDaemon: 15. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.122982 [ 168 ] <Fatal> BaseDaemon: 16. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.123071 [ 168 ] <Fatal> BaseDaemon: 17. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.123142 [ 168 ] <Fatal> BaseDaemon: 18. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.123238 [ 168 ] <Fatal> BaseDaemon: 19. DB::ColumnAliasesMatcher::visit(DB::ASTIdentifier&, std::__1::shared_ptr<DB::IAST>&, DB::ColumnAliasesMatcher::Data&) @ 0xf754eae in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.123342 [ 168 ] <Fatal> BaseDaemon: 20. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf7542b8 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.123434 [ 168 ] <Fatal> BaseDaemon: 21. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.123522 [ 168 ] <Fatal> BaseDaemon: 22. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.123600 [ 168 ] <Fatal> BaseDaemon: 23. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.123700 [ 168 ] <Fatal> BaseDaemon: 24. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.123789 [ 168 ] <Fatal> BaseDaemon: 25. DB::ColumnAliasesMatcher::visit(DB::ASTIdentifier&, std::__1::shared_ptr<DB::IAST>&, DB::ColumnAliasesMatcher::Data&) @ 0xf754eae in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.123884 [ 168 ] <Fatal> BaseDaemon: 26. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf7542b8 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.123978 [ 168 ] <Fatal> BaseDaemon: 27. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.124049 [ 168 ] <Fatal> BaseDaemon: 28. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.124139 [ 168 ] <Fatal> BaseDaemon: 29. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.124238 [ 168 ] <Fatal> BaseDaemon: 30. DB::InDepthNodeVisitor<DB::ColumnAliasesMatcher, false, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xf754313 in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.124337 [ 168 ] <Fatal> BaseDaemon: 31. DB::ColumnAliasesMatcher::visit(DB::ASTIdentifier&, std::__1::shared_ptr<DB::IAST>&, DB::ColumnAliasesMatcher::Data&) @ 0xf754eae in /usr/bin/clickhouse
[00fc17cc0543] 2021.05.19 16:41:42.282099 [ 168 ] <Fatal> BaseDaemon: Checksum of the binary: 34D21D52DBEF5EEECE3EA692751B041B, integrity check passed.
Exception on client:
Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000
Connecting to localhost:9000 as user default.
Code: 210. DB::NetException: Connection refused (localhost:9000)
```
**Expected behavior**
I'd expect either the table definition to be invalid or the query returning an error.
I'd expect the server to not segfault.
| https://github.com/ClickHouse/ClickHouse/issues/24292 | https://github.com/ClickHouse/ClickHouse/pull/25059 | 8e88e682c194f6ee44abfff15e34b884ca1e9a2c | 185fb83587360e5fc2053b71af41aeda1ddd6ad9 | "2021-05-19T14:50:42Z" | c++ | "2021-06-08T20:36:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,291 | ["src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.cpp", "src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.h", "tests/queries/0_stateless/01888_bloom_filter_hasAny.reference", "tests/queries/0_stateless/01888_bloom_filter_hasAny.sql"] | bloom_filter does not support hasAny | 21.5.1.6605
```sql
create table bftest (k Int64, x Array(Int64), index ix1(x) TYPE bloom_filter GRANULARITY 3)
Engine=MergeTree order by k;
insert into bftest select number,
arrayMap(i->rand64()%565656, range(10)) from numbers(10000000);
insert into bftest select number,
arrayMap(i->rand64()%565656, range(10)) from numbers(100000000);
-- hasAny(x, [42,-42])
select count() from bftest where hasAny(x, [42,-42]);
Elapsed: 0.909 sec. Processed 110.00 million rows
-- has(x, 42) or has(x, -42)
select count() from bftest where has (x, 42) or has(x, -42);
Elapsed: 0.143 sec. Processed 6.55 million rows
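-- (added note, not from the original report) on builds that support it,
-- EXPLAIN indexes = 1 should show whether the bloom_filter skip index prunes
-- granules for each variant:
-- EXPLAIN indexes = 1 select count() from bftest where hasAny(x, [42,-42]);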
``` | https://github.com/ClickHouse/ClickHouse/issues/24291 | https://github.com/ClickHouse/ClickHouse/pull/24900 | 2a6b4a64f4eed54d41f70542dd2618ef0370f583 | eb7b491da97982e3fd5a050ba22b2052d1d538ba | "2021-05-19T14:49:01Z" | c++ | "2021-06-11T21:58:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,274 | ["src/Storages/StorageMemory.cpp", "tests/queries/0_stateless/01867_fix_storage_memory_mutation.reference", "tests/queries/0_stateless/01867_fix_storage_memory_mutation.sql"] | Mutation of StorageMemory with more than one block will fail when max_threads > 1 | How to reproduce:
```sql
VM-16-2-centos :) create table mem_test(a Int64, b Int64) engine = Memory
CREATE TABLE mem_test
(
`a` Int64,
`b` Int64
)
ENGINE = Memory
Query id: fb8d3ab7-6e69-410d-b5e3-9891c8604a9c
Ok.
0 rows in set. Elapsed: 0.004 sec.
VM-16-2-centos :) set max_block_size = 3
SET max_block_size = 3
Query id: 6803f260-4893-4f0b-91cf-fc5d5670ce7c
Ok.
0 rows in set. Elapsed: 0.001 sec.
VM-16-2-centos :) insert into mem_test select number, number from numbers(100)
INSERT INTO mem_test SELECT
number,
number
FROM numbers(100)
Query id: 19f11e21-4b76-4015-ac05-99732facee6a
Ok.
0 rows in set. Elapsed: 0.002 sec.
VM-16-2-centos :) alter table mem_test update a = 0 where b = 99
ALTER TABLE mem_test
UPDATE a = 0 WHERE b = 99
Query id: a9d72960-7c86-4195-b739-7148fea99c10
Ok.
0 rows in set. Elapsed: 0.001 sec.
VM-16-2-centos :) select * from mem_test format Null
SELECT *
FROM mem_test
FORMAT Null
Query id: 413f4d7e-cca9-4523-aedb-f9673bc515cb
0 rows in set. Elapsed: 0.011 sec.
Received exception from server (version 21.6.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid number of rows in Chunk column Int64 position 1: expected 1, got 3: While executing Memory.
``` | https://github.com/ClickHouse/ClickHouse/issues/24274 | https://github.com/ClickHouse/ClickHouse/pull/24275 | 976ccc2e908ac3bc28f763bfea8134ea0a121b40 | b9c3601083b58bcd98c27c98244c7182a22ac08a | "2021-05-19T08:49:09Z" | c++ | "2021-05-20T07:09:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,264 | ["docs/tools/website.py", "website/css/blog.css", "website/js/embedd.min.js", "website/templates/blog/content.html"] | Our website is using "embedd comments" JavaScript library that has a bug. | See https://github.com/tgallant/embedd/issues/147 | https://github.com/ClickHouse/ClickHouse/issues/24264 | https://github.com/ClickHouse/ClickHouse/pull/24265 | 8b13e73ac243adc96f8d64fb156e601907ddb9d5 | 2963baedc2e4566de280197cc05c4f878eea5313 | "2021-05-18T23:22:21Z" | c++ | "2021-05-19T02:10:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,209 | ["src/Interpreters/HashJoin.cpp", "tests/performance/dict_join.xml"] | Join on external dictionary works slow with grouping | A SELECT query with aggregation and a join on an external dictionary runs slowly when no cast is provided in the join expression. The table and the dictionary both have UInt64 ids. With an explicit cast like `table.dict_id = toUInt64(dict.id)` the query is fast.
**Does it reproduce on recent release?**
Yes
**How to reproduce**
* ClickHouse server version to use: 21.4.6.55
```
create table test (key UInt64, value String) engine=MergeTree order by key;
insert into test select number, '' from numbers(1000000);
create dictionary test_dict (key UInt64, value String)
primary key key source(clickhouse(table 'test' db 'default' user 'default'))
lifetime(min 0 max 0) layout(hashed());
-- Select without GROUP BY
-- 1.1
select test.key, test_dict.value from test join test_dict on test.key = test_dict.key limit 1;
-- Elapsed: 0.038 sec.
-- 1.2
select test.key, test_dict.value from test join test_dict on test.key = toUInt64(test_dict.key) limit 1;
-- Elapsed: 0.316 sec. Processed 1.00 million rows, 17.00 MB (3.17 million rows/s., 53.86 MB/s.)
-- 1.3
select test.key, dictGetString('test_dict', 'value', test.key) from test limit 1;
-- Elapsed: 0.007 sec.
-- Select with GROUP BY
-- 2.1
select test.key from test join test_dict on test.key = test_dict.key group by test.key limit 1;
-- Elapsed: 1286.466 sec. Processed 1.00 million rows, 8.00 MB (777.32 rows/s., 6.22 KB/s.)
-- 2.2
select test.key from test join test_dict on test.key = toUInt64(test_dict.key) group by test.key limit 1;
-- Elapsed: 0.169 sec. Processed 2.00 million rows, 16.00 MB (11.81 million rows/s., 94.49 MB/s.)
-- 2.3
select test.key, dictGetString('test_dict', 'value', test.key) from test group by test.key limit 1;
-- Elapsed: 0.056 sec. Processed 1.00 million rows, 8.00 MB (17.91 million rows/s., 143.32 MB/s.)
```
**Expected behavior**
Both queries, with and without the explicit cast in the join, are expected to be fast.
| https://github.com/ClickHouse/ClickHouse/issues/24209 | https://github.com/ClickHouse/ClickHouse/pull/25618 | 2f38b25ba6cc6c1b3de249246506e266bba4fb6c | 1e9e073b0a7beee76f1eb447086e60f4f0c091b8 | "2021-05-17T19:32:19Z" | c++ | "2021-06-30T07:54:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,192 | ["src/Storages/StorageMemory.cpp", "tests/queries/0_stateless/01497_mutation_support_for_storage_memory.reference", "tests/queries/0_stateless/01497_mutation_support_for_storage_memory.sql"] | Got wrong result when executing mutation on Memory table engine | Clickhouse version: 21.2.5.5
Got a wrong result when executing a mutation on the Memory table engine.
```sql
CREATE TABLE mem_test(id Int64) ENGINE=Memory();
INSERT INTO mem_test VALUES (1), (2), (3);
ALTER TABLE mem_test UPDATE id=4 WHERE id=1;
SELECT count(*) FROM mem_test;
┌─count()─┐
│ 24 │
└─────────┘
SELECT * FROM mem_test;
┌─id─┐
│ 4 │
│ 2 │
│ 3 │
└────┘
3 rows in set. Elapsed: 0.001 sec.
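-- (added observation, my speculation) the data itself still has 3 rows, so presumably
-- only the table's cached row counter is corrupted by the mutation; a full scan
-- could confirm this, e.g.:
--   SELECT count(*) FROM (SELECT * FROM mem_test WHERE id > 0);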
```
| https://github.com/ClickHouse/ClickHouse/issues/24192 | https://github.com/ClickHouse/ClickHouse/pull/24193 | 64baa524b8ce4a2bf2478ad36491a01656b9f18a | cee8e280c78b39a3dbdf115acf08e25fe1691144 | "2021-05-17T10:16:07Z" | c++ | "2021-05-18T07:57:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,158 | ["src/Columns/ReverseIndex.h", "tests/queries/0_stateless/02046_low_cardinality_parallel_group_by.reference", "tests/queries/0_stateless/02046_low_cardinality_parallel_group_by.sh"] | Crash on MV Insert Into Locked Buffer Table With LowCardinality Key Column | ClickHouse 21.4.4
We are testing using buffer tables for materialized views to reduce the number of parts created during inserts. When trying to read from one of those buffer tables (using a Distributed table), ClickHouse crashed on multiple shards. This is the table:
```
CREATE TABLE comcast_xcr.atsec_svc_1h
(
`datetime` DateTime,
`svc_type` LowCardinality(String),
`svc` LowCardinality(String),
`cache_group` LowCardinality(String),
`client_status` Enum8('NONE' = 0, 'SUCCESS' = 1, 'CLIENT_ERROR' = 2, 'SERVER_ERROR' = 3),
`cache_result` Enum8('HIT' = 1, 'MISS' = 2, 'ERROR' = 3),
`event_count` UInt64,
`served_bytes` UInt64,
`parent_bytes` UInt64,
`ttms_avg` AggregateFunction(avg, UInt32),
`ttms_quants` AggregateFunction(quantilesTiming(0.99, 0.95, 0.9), UInt32),
`chi_count` AggregateFunction(uniq, FixedString(16)),
`manifest_count` UInt64,
`fragment_count` UInt64
)
ENGINE = Buffer('comcast_xcr', 'atsec_svc_1h_mt', 1, 30, 300, 10000, 1000000, 1000000, 100000000)
```
Note that this table has only one buffer "layer" (we don't expect a huge number of inserts). It also relies on an implicit cast of String to LowCardinality String.
The crash (on 5 or 6 servers):
```
2021.05.15 20:51:20.446923 [ 399962 ] {} <Fatal> BaseDaemon: ########################################
2021.05.15 20:51:20.446960 [ 399962 ] {} <Fatal> BaseDaemon: (version 21.4.4.30 (official build), build id: E3FA92117218D182F17C14F864FF4ED3D3689BFE) (from thread 2448209) (no query) Received signal Segmentation fault (11)
2021.05.15 20:51:20.446976 [ 399962 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
2021.05.15 20:51:20.447003 [ 399962 ] {} <Fatal> BaseDaemon: Stack trace: 0xf1439f7 0xf142358 0xf140dce 0xfa59948 0xfd529a0 0xfd52120 0xf4ba024 0xf4c3fdb 0xf4c45fc 0xf4c4679 0xf4bbb6f 0xfd4e458 0xfd4bd03 0xfd4eff5 0xf33b820 0xf33d817 0xf33e5e2 0x8954fef 0x8958a83 0x7fb34a24314a 0x7fb349f74f23
2021.05.15 20:51:20.475409 [ 399962 ] {} <Fatal> BaseDaemon: 1. DB::ReverseIndex<unsigned long, DB::ColumnString>::insert(StringRef const&) @ 0xf1439f7 in /usr/bin/clickhouse
2021.05.15 20:51:20.475428 [ 399962 ] {} <Fatal> BaseDaemon: 2. COW<DB::IColumn>::mutable_ptr<DB::IColumn> DB::ColumnUnique<DB::ColumnString>::uniqueInsertRangeImpl<char8_t>(DB::IColumn const&, unsigned long, unsigned long, unsigned long, DB::ColumnVector<char8_t>::MutablePtr&&, DB::ReverseIndex<unsigned long, DB::ColumnString>*, unsigned long) @ 0xf142358 in /usr/bin/clickhouse
2021.05.15 20:51:20.475436 [ 399962 ] {} <Fatal> BaseDaemon: 3. DB::ColumnUnique<DB::ColumnString>::uniqueInsertRangeFrom(DB::IColumn const&, unsigned long, unsigned long) @ 0xf140dce in /usr/bin/clickhouse
2021.05.15 20:51:20.475447 [ 399962 ] {} <Fatal> BaseDaemon: 4. DB::ColumnLowCardinality::insertRangeFrom(DB::IColumn const&, unsigned long, unsigned long) @ 0xfa59948 in /usr/bin/clickhouse
2021.05.15 20:51:20.475456 [ 399962 ] {} <Fatal> BaseDaemon: 5. DB::BufferBlockOutputStream::insertIntoBuffer(DB::Block const&, DB::StorageBuffer::Buffer&) @ 0xfd529a0 in /usr/bin/clickhouse
2021.05.15 20:51:20.475461 [ 399962 ] {} <Fatal> BaseDaemon: 6. DB::BufferBlockOutputStream::write(DB::Block const&) @ 0xfd52120 in /usr/bin/clickhouse
2021.05.15 20:51:20.475470 [ 399962 ] {} <Fatal> BaseDaemon: 7. DB::PushingToViewsBlockOutputStream::write(DB::Block const&) @ 0xf4ba024 in /usr/bin/clickhouse
2021.05.15 20:51:20.475476 [ 399962 ] {} <Fatal> BaseDaemon: 8. DB::AddingDefaultBlockOutputStream::write(DB::Block const&) @ 0xf4c3fdb in /usr/bin/clickhouse
2021.05.15 20:51:20.475483 [ 399962 ] {} <Fatal> BaseDaemon: 9. DB::SquashingBlockOutputStream::finalize() @ 0xf4c45fc in /usr/bin/clickhouse
2021.05.15 20:51:20.475488 [ 399962 ] {} <Fatal> BaseDaemon: 10. DB::SquashingBlockOutputStream::writeSuffix() @ 0xf4c4679 in /usr/bin/clickhouse
2021.05.15 20:51:20.475493 [ 399962 ] {} <Fatal> BaseDaemon: 11. DB::PushingToViewsBlockOutputStream::writeSuffix() @ 0xf4bbb6f in /usr/bin/clickhouse
2021.05.15 20:51:20.475499 [ 399962 ] {} <Fatal> BaseDaemon: 12. DB::StorageBuffer::writeBlockToDestination(DB::Block const&, std::__1::shared_ptr<DB::IStorage>) @ 0xfd4e458 in /usr/bin/clickhouse
2021.05.15 20:51:20.475506 [ 399962 ] {} <Fatal> BaseDaemon: 13. DB::StorageBuffer::flushBuffer(DB::StorageBuffer::Buffer&, bool, bool, bool) @ 0xfd4bd03 in /usr/bin/clickhouse
2021.05.15 20:51:20.475511 [ 399962 ] {} <Fatal> BaseDaemon: 14. DB::StorageBuffer::backgroundFlush() @ 0xfd4eff5 in /usr/bin/clickhouse
2021.05.15 20:51:20.475519 [ 399962 ] {} <Fatal> BaseDaemon: 15. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0xf33b820 in /usr/bin/clickhouse
2021.05.15 20:51:20.475524 [ 399962 ] {} <Fatal> BaseDaemon: 16. DB::BackgroundSchedulePool::threadFunction() @ 0xf33d817 in /usr/bin/clickhouse
2021.05.15 20:51:20.475529 [ 399962 ] {} <Fatal> BaseDaemon: 17. ? @ 0xf33e5e2 in /usr/bin/clickhouse
2021.05.15 20:51:20.475539 [ 399962 ] {} <Fatal> BaseDaemon: 18. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8954fef in /usr/bin/clickhouse
2021.05.15 20:51:20.475544 [ 399962 ] {} <Fatal> BaseDaemon: 19. ? @ 0x8958a83 in /usr/bin/clickhouse
2021.05.15 20:51:20.475557 [ 399962 ] {} <Fatal> BaseDaemon: 20. start_thread @ 0x814a in /usr/lib64/libpthread-2.28.so
2021.05.15 20:51:20.475567 [ 399962 ] {} <Fatal> BaseDaemon: 21. clone @ 0xfcf23 in /usr/lib64/libc-2.28.so
2021.05.15 20:51:20.576475 [ 399962 ] {} <Fatal> BaseDaemon: Checksum of the binary: 21B45BF98BF6821B2FE099092F5117E8, integrity check passed.
2021.05.15 20:51:42.120885 [ 2446750 ] {} <Fatal> Application: Child process was terminated by signal 11.
```
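One mitigation we are considering (a hypothetical sketch with made-up names, untested): make the cast explicit in the materialized view's `SELECT`, so the Buffer insert path never has to convert `String` to `LowCardinality(String)` itself:

```sql
CREATE MATERIALIZED VIEW mv_example TO buffer_example AS
SELECT CAST(svc AS LowCardinality(String)) AS svc
FROM source_example;
```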
Everything seems fine if we don't try to read from the buffer table. | https://github.com/ClickHouse/ClickHouse/issues/24158 | https://github.com/ClickHouse/ClickHouse/pull/29782 | 754e038eec7a77895f8c57cadb933c828eb52779 | 25e2ebac75de1972bcb9db8b18e326afaf6679ab | "2021-05-15T21:32:18Z" | c++ | "2021-10-07T13:27:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,128 | ["src/DataTypes/getLeastSupertype.cpp", "src/Functions/DateTimeTransforms.h", "tests/queries/0_stateless/00735_long_conditional.reference", "tests/queries/0_stateless/01925_date_date_time_comparison.reference", "tests/queries/0_stateless/01925_date_date_time_comparison.sql", "tests/queries/0_stateless/01926_date_date_time_supertype.reference", "tests/queries/0_stateless/01926_date_date_time_supertype.sql"] | TimeZone is ignored in function if with Date and DateTime | ```
qoega-dev.sas.yp-c.yandex.net :) SELECT toDate('2000-01-01') < toDateTime('2000-01-01 00:00:01', 'Europe/Moscow');
SELECT toDate('2000-01-01') < toDateTime('2000-01-01 00:00:01', 'Europe/Moscow')
Query id: baf63ad5-1893-401c-9728-63bd5c295df1
┌─less(toDate('2000-01-01'), toDateTime('2000-01-01 00:00:01', 'Europe/Moscow'))─┐
│ 0 │
└────────────────────────────────────────────────────────────────────────────────┘
1 rows in set. Elapsed: 0.011 sec.
qoega-dev.sas.yp-c.yandex.net :) SELECT toDate('2000-01-01') < toDateTime('2000-01-01 00:00:01');
SELECT toDate('2000-01-01') < toDateTime('2000-01-01 00:00:01')
Query id: c2136446-a3a0-4567-a94e-68f40dc1f158
┌─less(toDate('2000-01-01'), toDateTime('2000-01-01 00:00:01'))─┐
│ 1 │
└───────────────────────────────────────────────────────────────┘
1 rows in set. Elapsed: 0.011 sec.
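-- (added sketch, not from the original report) with the Date -> DateTime conversion
-- made explicit in the same time zone, the comparison gives the expected result:
--   SELECT toDateTime(toDate('2000-01-01'), 'Europe/Moscow') < toDateTime('2000-01-01 00:00:01', 'Europe/Moscow')
-- returns 1, suggesting the implicit Date/DateTime supertype ignores the time zone.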
``` | https://github.com/ClickHouse/ClickHouse/issues/24128 | https://github.com/ClickHouse/ClickHouse/pull/24129 | d5bd2b1fa9844cf290049ca143aefddc1e8c72a8 | 4bb3fd53f7ce6e2e75a4449d76eea0b81884699a | "2021-05-14T16:04:28Z" | c++ | "2021-06-29T17:13:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,126 | ["programs/server/Server.cpp", "src/Access/AccessControl.cpp", "src/Access/Common/AuthenticationData.cpp", "src/Access/GSSAcceptor.cpp", "src/Access/GSSAcceptor.h", "src/Access/UsersConfigAccessStorage.cpp"] | Unexpected behaviour when bad hash in password_sha256_hex | **Describe the unexpected behaviour**
If an incorrect hash is entered in the password_sha256_hex field when creating a user via XML, ClickHouse crashes without a meaningful error description.
**How to reproduce**
```
<yandex>
<users>
<testuser>
<password_sha256_hex>a95aff94b1f06ec175c359cd1df425c8bb53a1c9c50596ceed8ecbdba9c5752r</password_sha256_hex>
<profile>readonly</profile>
</testuser>
</users>
</yandex>
```
**Expected behavior**
Some human readable error in logs. Maybe just skip this user without crash.
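For reference (my addition), a valid value for `password_sha256_hex` can be generated server-side, e.g.:

```sql
SELECT lower(hex(SHA256('my_password'))) AS password_sha256_hex
```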
**Error message and/or stacktrace**
/var/log/clickhouse-server/clickhouse-server.log
```
2021.05.14 17:26:08.911812 [ 25348 ] {} <Information> Application: Will watch for the process with pid 25350
2021.05.14 17:26:08.911957 [ 25350 ] {} <Information> Application: Forked a child process to watch
2021.05.14 17:26:08.912391 [ 25350 ] {} <Information> SentryWriter: Sending crash reports is disabled
2021.05.14 17:26:08.912513 [ 25350 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2021.05.14 17:26:08.966801 [ 25350 ] {} <Information> : Starting ClickHouse 21.4.3.21 with revision 54449, build id: 4E8F6FDA87D4EEA23848BE55E4AFF2B2E87C8553, PID 25350
2021.05.14 17:26:08.966941 [ 25350 ] {} <Information> Application: starting up
2021.05.14 17:26:09.092420 [ 25350 ] {} <Information> Application: Calculated checksum of the binary: 1C0FC2BE13EADA0AAFC7B8FB103A0FE8, integrity check passed.
2021.05.14 17:26:09.092580 [ 25350 ] {} <Trace> Application: Will do mlock to prevent executable memory from being paged out. It may take a few seconds.
2021.05.14 17:26:09.105372 [ 25350 ] {} <Trace> Application: The memory map of clickhouse executable has been mlock'ed, total 186.13 MiB
2021.05.14 17:26:09.105515 [ 25350 ] {} <Debug> Application: rlimit on number of file descriptors is 500000
2021.05.14 17:26:09.105529 [ 25350 ] {} <Debug> Application: Initializing DateLUT.
2021.05.14 17:26:09.105537 [ 25350 ] {} <Trace> Application: Initialized DateLUT with time zone 'Europe/Moscow'.
2021.05.14 17:26:09.105556 [ 25350 ] {} <Debug> Application: Setting up /var/lib/clickhouse/tmp/ to store temporary data in it
2021.05.14 17:26:09.105657 [ 25350 ] {} <Debug> Application: Skipped file in temporary path /var/lib/clickhouse/tmp/data
2021.05.14 17:26:09.106004 [ 25350 ] {} <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '***' as replica host.
2021.05.14 17:26:09.106179 [ 25350 ] {} <Information> SensitiveDataMaskerConfigRead: 1 query masking rules loaded.
2021.05.14 17:26:09.106975 [ 25350 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/config.xml'
2021.05.14 17:26:09.110256 [ 25350 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/config.xml', performing update on configuration
2021.05.14 17:26:09.111302 [ 25350 ] {} <Information> Application: Setting max_server_memory_usage was set to 44.23 GiB (49.14 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2021.05.14 17:26:09.112183 [ 25350 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/config.xml', performed update on configuration
2021.05.14 17:26:09.115887 [ 25350 ] {} <Debug> ConfigReloader: Loading config '/etc/clickhouse-server/users.xml'
2021.05.14 17:26:09.116431 [ 25350 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2021.05.14 17:26:09.118825 [ 25350 ] {} <Error> Application: std::exception
2021.05.14 17:26:09.118859 [ 25350 ] {} <Information> Application: shutting down
2021.05.14 17:26:09.118870 [ 25350 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2021.05.14 17:26:09.119059 [ 25351 ] {} <Trace> BaseDaemon: Received signal -2
2021.05.14 17:26:09.119101 [ 25351 ] {} <Information> BaseDaemon: Stop SignalListener thread
2021.05.14 17:26:09.138296 [ 25348 ] {} <Information> Application: Child process exited normally with code 70.
```
/var/log/clickhouse-server/clickhouse-server.err.log
```
2021.05.14 17:26:09.118825 [ 25350 ] {} <Error> Application: std::exception
```
**Additional context**
ClickHouse version: 21.4.3.21
OS: Ubuntu 16.04 | https://github.com/ClickHouse/ClickHouse/issues/24126 | https://github.com/ClickHouse/ClickHouse/pull/31557 | 526044af8a8ce85eca993e19fe2def4bb15873e8 | 9a0d98fa6d5ffda30a2243e607040024fd02645c | "2021-05-14T14:29:17Z" | c++ | "2021-11-20T16:47:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,123 | ["src/Interpreters/getHeaderForProcessingStage.cpp", "src/Interpreters/getHeaderForProcessingStage.h", "src/Storages/StorageMaterializedView.cpp", "src/Storages/StorageMerge.cpp", "tests/queries/0_stateless/01890_materialized_distributed_join.reference", "tests/queries/0_stateless/01890_materialized_distributed_join.sh", "tests/queries/0_stateless/arcadia_skip_list.txt"] | Missing columns exception when joining Distributed Materialized View | **Describe the bug**
Using a column from the right side of a join in the SELECT clause, where the left side is a Distributed materialized view, returns a 'Missing columns' exception.
The column can be used in the WHERE condition without a problem.
**Does it reproduce on recent release?**
Yes
**How to reproduce**
* Which ClickHouse server version to use: 21.4.6 revision 54447
* `CREATE TABLE` statements for all tables involved
```
create table test.test_shard ON CLUSTER '{cluster}' (k UInt64, v UInt64) ENGINE ReplicatedMergeTree() ORDER BY (k);
create table test.test_local ON CLUSTER '{cluster}' (k UInt64, v UInt64) ENGINE MergeTree() ORDER BY (k);
create materialized view test.test_distributed ON CLUSTER '{cluster}' engine Distributed('{cluster}', 'test', 'test_shard', k) as select k,v from test.test_source;
create table test.test_source ON CLUSTER '{cluster}' (k UInt64, v UInt64) ENGINE MergeTree() ORDER BY (k);
```
Join between Distributed Materialized View and MergeTree is not working
```
select * from test.test_distributed td asof join test.test_local tl on td.k = tl.k and td.v < tl.v;
```
Error:
```
Received exception from server (version 21.4.6):
Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Missing columns: 'tl.v' 'tl.k' while processing query: 'SELECT k, v, tl.k, tl.v FROM test.test_distributed AS td', required columns: 'k' 'v' 'tl.k' 'tl.v' 'k' 'v' 'tl.k' 'tl.v'.
```
Join with one Shard is working
```
select * from test.test_shard td asof join test.test_local tl on td.k = tl.k and td.v < tl.v;
```
Full exception
```
2021.05.14 11:02:12.054811 [ 168 ] {4b3406f9-72c5-497c-b97b-6f13e3f59633} <Error> TCPHandler: Code: 47, e.displayText() = DB::Exception: Missing columns: 'tl.v' 'tl.k' while processing query: 'SELECT k, v, tl.k, tl.v FROM test.test_distributed AS td', required columns: 'k' 'v' 'tl.k' 'tl.v' 'k' 'v' 'tl.k' 'tl.v', Stack trace:
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x87f714a in /usr/bin/clickhouse
1. DB::TreeRewriterResult::collectUsedColumns(std::__1::shared_ptr<DB::IAST> const&, bool) @ 0xf6d0a90 in /usr/bin/clickhouse
2. DB::TreeRewriter::analyzeSelect(std::__1::shared_ptr<DB::IAST>&, DB::TreeRewriterResult&&, DB::SelectQueryOptions const&, std::__1::vector<DB::TableWithColumnNamesAndTypes, std::__1::allocator<DB::TableWithColumnNamesAndTypes> > const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::TableJoin>) const @ 0xf6d72e4 in /usr/bin/clickhouse
3. ? @ 0xf268c4e in /usr/bin/clickhouse
4. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&) @ 0xf264e32 in /usr/bin/clickhouse
5. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, DB::SelectQueryOptions const&) @ 0xf267584 in /usr/bin/clickhouse
6. DB::getHeaderForProcessingStage(DB::IStorage const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo const&, DB::Context const&, DB::QueryProcessingStage::Enum) @ 0xf7458d6 in /usr/bin/clickhouse
7. DB::StorageMaterializedView::read(DB::QueryPlan&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo&, DB::Context const&, DB::QueryProcessingStage::Enum, unsigned long, unsigned int) @ 0xf983b69 in /usr/bin/clickhouse
8. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&) @ 0xf276e2e in /usr/bin/clickhouse
9. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>) @ 0xf26c581 in /usr/bin/clickhouse
10. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0xf26b39b in /usr/bin/clickhouse
11. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0xf592583 in /usr/bin/clickhouse
12. DB::InterpreterSelectWithUnionQuery::execute() @ 0xf59370e in /usr/bin/clickhouse
13. ? @ 0xf7356e2 in /usr/bin/clickhouse
14. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0xf734023 in /usr/bin/clickhouse
15. DB::TCPHandler::runImpl() @ 0xfeebdcd in /usr/bin/clickhouse
16. DB::TCPHandler::run() @ 0xfefe399 in /usr/bin/clickhouse
17. Poco::Net::TCPServerConnection::start() @ 0x125b865f in /usr/bin/clickhouse
18. Poco::Net::TCPServerDispatcher::run() @ 0x125ba071 in /usr/bin/clickhouse
19. Poco::PooledThread::run() @ 0x126f0799 in /usr/bin/clickhouse
20. Poco::ThreadImpl::runnableEntry(void*) @ 0x126ec5fa in /usr/bin/clickhouse
21. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
22. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
```
What can I do?
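One workaround sketch I can think of (my addition, untested): join against a plain `Distributed` table over the same shard table instead of the materialized view:

```sql
CREATE TABLE test.test_distributed_plain ON CLUSTER '{cluster}' AS test.test_shard
ENGINE = Distributed('{cluster}', 'test', 'test_shard', k);

SELECT * FROM test.test_distributed_plain td
ASOF JOIN test.test_local tl ON td.k = tl.k AND td.v < tl.v;
```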
| https://github.com/ClickHouse/ClickHouse/issues/24123 | https://github.com/ClickHouse/ClickHouse/pull/24870 | fab7c9c7f63a905313c9620f9bce69457fc6395b | 6af272891a06a16f0994989169c8717337b0c469 | "2021-05-14T09:52:27Z" | c++ | "2021-06-18T08:18:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,110 | ["src/Common/CurrentMemoryTracker.cpp", "src/Common/CurrentMemoryTracker.h", "src/Common/MemoryTracker.cpp", "src/Common/MemoryTracker.h", "src/Common/new_delete.cpp"] | MemoryTracker should be used in operator new in a light way | **Use case**
A very rare deadlock is possible.
**Describe the solution you'd like**
If MemoryTracker is called from operator new, don't:
- log anything (1);
- throw (2).
(1) is strictly necessary to avoid the deadlock;
(2) is almost necessary to avoid tons of peculiarities in destructors. | https://github.com/ClickHouse/ClickHouse/issues/24110 | https://github.com/ClickHouse/ClickHouse/pull/24483 | a7e95b4ee2284fac1b6c1f466f89e0c1569f5d72 | e5b78723bb6fc3d5f7f6789f6514feb96c097f34 | "2021-05-13T21:07:52Z" | c++ | "2021-05-27T07:35:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,093 | ["src/Storages/tests/gtest_transform_query_for_external_database.cpp"] | Issues querying foreign tables with functions in where statements | When using a foreign table (e.g. a mysql table) clickhouse doesn't seem to properly send the result of functions if they are called inside `WHERE` statements.
Executing `SELECT foo FROM foreign_table WHERE a=10 AND b='2019-10-05'` will send a statement like:
```
SELECT `foo`, `a`, `b` FROM `db`.`foreign_table` WHERE (`a` = 10) AND (`b` = '2019-10-05')
```
But executing `SELECT foo FROM foreign_table WHERE a=10 AND b=toDate('2019-10-05')` will send:
```
SELECT `foo`, `a`, `b` FROM `db`.`foreign_table` WHERE (`a` = 10)
```
I assume that the function is evaluated afterwards and the filter on `b` is applied on the ClickHouse side.
I'm not sure if there is an easy fix for this, but I assume this would help with the efficiency of mysql foreign tables and might be critical in some high-load scenarios where one wants to filter on indexed columns.
My specific issue here is different and relates to the clickhouse mindsdb integration, but I think this "bug" (if you'd call it such) is more generic. | https://github.com/ClickHouse/ClickHouse/issues/24093 | https://github.com/ClickHouse/ClickHouse/pull/40476 | 2b0c62ab80d1d43a2c7dc2fbe24db0124efe18d4 | d370c45137db9b140ae6132a72f65d402f258974 | "2021-05-13T14:19:09Z" | c++ | "2022-08-22T12:37:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,075 | ["src/Processors/QueryPipeline.cpp", "tests/performance/join_max_streams.xml"] | version 21.6 is much slower than 21.4 |
I got the binary from follwing url then strip it
```
wget -c https://builds.clickhouse.tech/master/aarch64/clickhouse
kylin@kylin-gtj:~$ strip clickhouse
kylin@kylin-gtj:~$ ll cl*
-rw-r--r-- 1 kylin kylin 234645184 5月 13 08:21 clickhouse
kylin@kylin-gtj:~$ cd ch
kylin@kylin-gtj:~/ch$ ll
总用量 242132
drwxrwxr-x 13 kylin kylin 4096 5月 11 16:28 ./
drwx------ 39 kylin kylin 4096 5月 13 08:21 ../
-rwxrwxr-x 1 kylin kylin 219328512 3月 26 15:24 clickhouse*
drwxr-x--- 6 kylin kylin 4096 5月 11 13:56 data/
drwxr-x--- 2 kylin kylin 4096 3月 26 15:25 dictionaries_lib/
drwxr-x--- 2 kylin kylin 4096 3月 26 15:25 flags/
drwxr-x--- 2 kylin kylin 4096 3月 26 15:25 format_schemas/
drwxr-x--- 2 kylin kylin 4096 3月 30 09:45 metadata/
drwxr-x--- 2 kylin kylin 4096 4月 29 13:31 metadata_dropped/
-rw------- 1 kylin kylin 28550256 5月 11 16:28 nohup.out
drwxr-x--- 2 kylin kylin 4096 3月 26 15:25 preprocessed_configs/
drwxr-x--- 4 kylin kylin 4096 4月 8 09:47 shadow/
drwxr-x--- 53 kylin kylin 4096 4月 29 13:23 store/
drwxr-x--- 3 kylin kylin 4096 3月 31 12:50 tmp/
drwxr-x--- 2 kylin kylin 4096 3月 26 15:25 user_files/
```
then rename the old binary file to clickhouse214 and satrt new file and run a query.
```
kylin@kylin-gtj:~/ch$ mv clickhouse clickhouse214
kylin@kylin-gtj:~/ch$ mv ../clickhouse .
kylin@kylin-gtj:~/ch$ chmod +x clickhouse
kylin@kylin-gtj:~/ch$ nohup ./clickhouse server &
[1] 3369
kylin@kylin-gtj:~/ch$ nohup: 忽略输入并把输出追加到'nohup.out'
kylin@kylin-gtj:~/ch$ mysql --protocol tcp -u default -P 9004
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 0
Server version: 21.6.1.6818-ClickHouse
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use pop
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> with t as(
-> select agg,age1,d6,d4,count(*)cnt from ren group by cube(agg,age1,d6,d4)),
-> st as (
-> select agg,age1,d6,d4 from aggs,age1s,d6s,d4s where age1=0 or toUInt8((age1+4)/5)=agg order by d6,d4)
-> select agg,age1,arraySlice(groupArray(cnt),1,10)ac from st left join t using(agg,age1,d6,d4) group by agg,age1 order by agg,age1
-> ;
+------+------+---------------------------------------------------------------------------------------+
| agg | age1 | ac |
+------+------+---------------------------------------------------------------------------------------+
126 rows in set (9.99 sec)
Read 128000189 rows, 2.01 GiB in 9.98721409 sec., 12816405 rows/sec., 205.82 MiB/sec.
mysql> explain with t as( select agg,age1,d6,d4,count(*)cnt from ren group by cube(agg,age1,d6,d4)), st as ( select agg,age1,d6,d4 from aggs,age1s,d6s,d4s where age1=0 or toUInt8((age1+4)/5)=agg order by d6,d4) select agg,age1,arraySlice(groupArray(cnt),1,10)ac from st left join t using(agg,age1,d6,d4) group by agg,age1 order by agg,age1;
+------------------------------------------------------------------------------------------------------------------------------------+
| explain |
+------------------------------------------------------------------------------------------------------------------------------------+
| Expression (Projection) |
| MergingSorted (Merge sorted streams for ORDER BY) |
| MergeSorting (Merge sorted blocks for ORDER BY) |
| PartialSorting (Sort each block for ORDER BY) |
| Expression (Before ORDER BY) |
| Aggregating |
| Expression (Before GROUP BY) |
| Join (JOIN) |
| Expression ((Before JOIN + Projection)) |
| MergingSorted (Merge sorted streams for ORDER BY) |
| MergeSorting (Merge sorted blocks for ORDER BY) |
| PartialSorting (Sort each block for ORDER BY) |
| Expression (Before ORDER BY) |
| Filter (WHERE) |
| Join (JOIN) |
| Expression ((Before JOIN + (Projection + Before ORDER BY))) |
| Filter (WHERE) |
| Join (JOIN) |
| Expression ((Before JOIN + (Projection + Before ORDER BY))) |
| Filter (WHERE) |
| Join (JOIN) |
| Expression (Before JOIN) |
| SettingQuotaAndLimits (Set limits and quota after reading from storage) |
| ReadFromStorage (Log) |
| Expression ((Joined actions + (Rename joined columns + (Projection + Before ORDER BY)))) |
| SettingQuotaAndLimits (Set limits and quota after reading from storage) |
| ReadFromStorage (Log) |
| Expression ((Joined actions + (Rename joined columns + (Projection + Before ORDER BY)))) |
| SettingQuotaAndLimits (Set limits and quota after reading from storage) |
| ReadFromStorage (Log) |
| Expression ((Joined actions + (Rename joined columns + (Projection + Before ORDER BY)))) |
| SettingQuotaAndLimits (Set limits and quota after reading from storage) |
| ReadFromStorage (Log) |
| Expression ((Joined actions + (Rename joined columns + (Projection + Before ORDER BY)))) |
| Cube |
| Aggregating |
| Expression (Before GROUP BY) |
| SettingQuotaAndLimits (Set limits and quota after reading from storage) |
| ReadFromMergeTree |
+------------------------------------------------------------------------------------------------------------------------------------+
39 rows in set (0.00 sec)
Read 39 rows, 2.82 KiB in 0.00643683 sec., 6058 rows/sec., 438.30 KiB/sec.
```
the query took about 10 seconds, but each of the sub-query are all very fast (ren has 128000000 rows, t has 7917 rows, and st has 22302 rows )
```
mysql> with t as( select agg,age1,d6,d4,count(*)cnt from ren group by cube(agg,age1,d6,d4))select count(*) from t;
+---------+
| count() |
+---------+
| 7917 |
+---------+
1 row in set (1.33 sec)
Read 128000000 rows, 2.01 GiB in 1.32724088 sec., 96440670 rows/sec., 1.51 GiB/sec.
mysql> with st as ( select agg,age1,d6,d4 from aggs,age1s,d6s,d4s where age1=0 or toUInt8((age1+4)/5)=agg order by d6,d4)select count(*) from st;
+---------+
| count() |
+---------+
| 22302 |
+---------+
1 row in set (0.01 sec)
Read 189 rows, 189.00 B in 0.01041345 sec., 18149 rows/sec., 17.72 KiB/sec.
mysql> select count(*) from ren;
+-----------+
| count() |
+-----------+
| 128000000 |
+-----------+
1 row in set (0.00 sec)
Read 1 rows, 4.01 KiB in 0.00057369 sec., 1743 rows/sec., 6.82 MiB/sec.
```
then I quit the version: 21.6, restart the version 2.14.
the same query took about 1.4 seconds.
```
kylin@kylin-gtj:~/ch$ pgrep clickhouse
3370
kylin@kylin-gtj:~/ch$ kill 3370
kylin@kylin-gtj:~/ch$ pgrep clickhouse
[1]+ 已完成 nohup ./clickhouse server
kylin@kylin-gtj:~/ch$ nohup ./clickhouse214 server &
[1] 3931
kylin@kylin-gtj:~/ch$ nohup: 忽略输入并把输出追加到'nohup.out'
kylin@kylin-gtj:~/ch$ mysql --protocol tcp -u default -P 9004
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 0
Server version: 21.4.1.6351-ClickHouse
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use pop
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> with t as( select agg,age1,d6,d4,count(*)cnt from ren group by cube(agg,age1,d6,d4)), st as ( select agg,age1,d6,d4 from aggs,age1s,d6s,d4s where age1=0 or toUInt8((age1+4)/5)=agg order by d6,d4) select agg,age1,arraySlice(groupArray(cnt),1,10)ac from st left join t using(agg,age1,d6,d4) group by agg,age1 order by agg,age1;
+------+------+---------------------------------------------------------------------------------------+
| agg | age1 | ac |
+------+------+---------------------------------------------------------------------------------------+
126 rows in set (1.41 sec)
Read 128000189 rows, 2.01 GiB in 1.40947438 sec., 90814129 rows/sec., 1.42 GiB/sec.
mysql> select count(*) from ren;
+-----------+
| count() |
+-----------+
| 128000000 |
+-----------+
1 row in set (0.00 sec)
Read 1 rows, 4.01 KiB in 0.00055921 sec., 1788 rows/sec., 7.00 MiB/sec.
mysql> with st as ( select agg,age1,d6,d4 from aggs,age1s,d6s,d4s where age1=0 or toUInt8((age1+4)/5)=agg order by d6,d4)select count(*) from st;
+---------+
| count() |
+---------+
| 22302 |
+---------+
1 row in set (0.01 sec)
Read 189 rows, 189.00 B in 0.00977349 sec., 19338 rows/sec., 18.88 KiB/sec.
mysql> with t as( select agg,age1,d6,d4,count(*)cnt from ren group by cube(agg,age1,d6,d4))select count(*) from t;
+---------+
| count() |
+---------+
| 7917 |
+---------+
1 row in set (1.33 sec)
Read 128000000 rows, 2.01 GiB in 1.33491026 sec., 95886595 rows/sec., 1.50 GiB/sec.
mysql> explain with t as( select agg,age1,d6,d4,count(*)cnt from ren group by cube(agg,age1,d6,d4)), st as ( select agg,age1,d6,d4 from aggs,age1s,d6s,d4s where age1=0 or toUInt8((age1+4)/5)=agg order by d6,d4) select agg,age1,arraySlice(groupArray(cnt),1,10)ac from st left join t using(agg,age1,d6,d4) group by agg,age1 order by agg,age1;
+---------------------------------------------------------------------------------------------------------------------------------+
| explain |
+---------------------------------------------------------------------------------------------------------------------------------+
| Expression (Projection) |
| MergingSorted (Merge sorted streams for ORDER BY) |
| MergeSorting (Merge sorted blocks for ORDER BY) |
| PartialSorting (Sort each block for ORDER BY) |
| Expression (Before ORDER BY) |
| CreatingSets (Create sets before main query execution) |
| Aggregating |
| Expression (Before GROUP BY) |
| Join (JOIN) |
| Expression ((Before JOIN + Projection)) |
| MergingSorted (Merge sorted streams for ORDER BY) |
| MergeSorting (Merge sorted blocks for ORDER BY) |
| PartialSorting (Sort each block for ORDER BY) |
| CreatingSets (Create sets before main query execution) |
| Expression (Before ORDER BY) |
| Filter (WHERE) |
| Join (JOIN) |
| Expression ((Before JOIN + Projection)) |
| CreatingSets (Create sets before main query execution) |
| Expression (Before ORDER BY) |
| Filter (WHERE) |
| Join (JOIN) |
| Expression ((Before JOIN + Projection)) |
| CreatingSets (Create sets before main query execution) |
| Expression (Before ORDER BY) |
| Filter (WHERE) |
| Join (JOIN) |
| Expression (Before JOIN) |
| SettingQuotaAndLimits (Set limits and quota after reading from storage) |
| ReadFromStorage (Log) |
| CreatingSet (Create set for JOIN) |
| Expression ((Projection + Before ORDER BY)) |
| SettingQuotaAndLimits (Set limits and quota after reading from storage) |
| ReadFromStorage (Log) |
| CreatingSet (Create set for JOIN) |
| Expression ((Projection + Before ORDER BY)) |
| SettingQuotaAndLimits (Set limits and quota after reading from storage) |
| ReadFromStorage (Log) |
| CreatingSet (Create set for JOIN) |
| Expression ((Projection + Before ORDER BY)) |
| SettingQuotaAndLimits (Set limits and quota after reading from storage) |
| ReadFromStorage (Log) |
| CreatingSet (Create set for JOIN) |
| Expression ((Projection + Before ORDER BY)) |
| Cube |
| Aggregating |
| Expression (Before GROUP BY) |
| SettingQuotaAndLimits (Set limits and quota after reading from storage) |
| ReadFromStorage (MergeTree) |
+---------------------------------------------------------------------------------------------------------------------------------+
49 rows in set (0.01 sec)
Read 49 rows, 3.56 KiB in 0.00660192 sec., 7422 rows/sec., 538.58 KiB/sec.
```
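A hedged side-by-side observation: the 21.4 plan above contains `CreatingSet (Create set for JOIN)` steps that are absent from the 21.6 plan, so the dimension joins appear to be executed differently between the two versions. A minimal way to compare just that part on both binaries (a sketch, assuming the same `pop` schema as above):
```sql
EXPLAIN
SELECT count()
FROM aggs, age1s, d6s, d4s
WHERE (age1 = 0) OR (toUInt8((age1 + 4) / 5) = agg);
```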
| https://github.com/ClickHouse/ClickHouse/issues/24075 | https://github.com/ClickHouse/ClickHouse/pull/26052 | 2062ddec90f526ec908e5eab6a88e96f631b6ef5 | f068defdd84921e0d89c7b2b1eb8308ec7813a05 | "2021-05-13T01:03:15Z" | c++ | "2021-07-08T14:18:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,060 | ["website/css/main.css", "website/src/scss/_variables.scss", "website/src/scss/components/_navbar.scss", "website/templates/docs/content.html"] | When an anchor is used, the target is hidden behind the header on documentation pages |
**Describe the issue**
When an anchor is used, the page is scrolled too much and the relevant place is hidden by the gray header row. For example, https://clickhouse.tech/docs/en/operations/settings/settings/#cancel-http-readonly-queries-on-client-close: only `Default value: 0` is visible to me
**Additional context**
Browser: Firefox 88.0-1 on Arch linux
| https://github.com/ClickHouse/ClickHouse/issues/24060 | https://github.com/ClickHouse/ClickHouse/pull/32812 | b0330d880159f0933604f6619abbe33695b44ca0 | 3c978dfbf9cddee464dc7819e48bc1084272a3ec | "2021-05-12T14:56:48Z" | c++ | "2021-12-16T13:12:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,035 | ["src/Common/StringUtils/StringUtils.h", "tests/queries/0_stateless/01932_null_valid_identifier.reference", "tests/queries/0_stateless/01932_null_valid_identifier.sql", "tests/queries/0_stateless/arcadia_skip_list.txt", "tests/queries/skip_list.json"] | Can't create table on cluster when ORDER BY uses a column named `null` | My ClickHouse version is 21.3.9.83. I want to create a table containing a field named `null`; I can create it successfully as a local table:
```sql
vm2173 :) CREATE TABLE default.test_table (
:-] `id` Int64 ,
:-] `day` DateTime,
:-] `null` Int32 )
:-] ENGINE = MergeTree()
:-] PARTITION BY toYYYYMMDD(`day`)
:-] ORDER BY (`day`, `null`);
CREATE TABLE default.test_table
(
`id` Int64,
`day` DateTime,
`null` Int32
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(day)
ORDER BY (day, null)
Query id: 6eb6df0e-1934-4ea3-86cc-742961c7b974
Ok.
0 rows in set. Elapsed: 0.016 sec.
```
but when I create it on cluster, I get error like this: `Column NULL with type Nullable(Nothing) is not allowed in key expression`:
```sql
vm2173 :) CREATE TABLE default.test_table ON CLUSTER eoi (
:-] `id` Int64 ,
:-] `day` DateTime,
:-] `null` Int32 )
:-] ENGINE = MergeTree()
:-] PARTITION BY toYYYYMMDD(`day`)
:-] ORDER BY (`day`, `null`);
CREATE TABLE default.test_table ON CLUSTER eoi
(
`id` Int64,
`day` DateTime,
`null` Int32
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(day)
ORDER BY (day, null)
Query id: e1afba47-e631-4087-a0b5-944b014d6a8b
┌─host─────┬─port─┬─status─┬─error────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─num_hosts_remaining─┬─num_hosts_active─┐
│ worker-1 │ 9000 │ 549 │ Code: 549, e.displayText() = DB::Exception: Column `NULL` with type Nullable(Nothing) is not allowed in key expression, it's not comparable (version 21.3.9.83 (official build)) │ 2 │ 0 │
│ worker-2 │ 9000 │ 549 │ Code: 549, e.displayText() = DB::Exception: Column `NULL` with type Nullable(Nothing) is not allowed in key expression, it's not comparable (version 21.3.9.83 (official build)) │ 1 │ 0 │
│ vm2173 │ 9000 │ 549 │ Code: 549, e.displayText() = DB::Exception: Column `NULL` with type Nullable(Nothing) is not allowed in key expression, it's not comparable (version 21.3.9.83 (official build)) │ 0 │ 0 │
└──────────┴──────┴────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────┴──────────────────┘
→ Progress: 0.00 rows, 0.00 B (0.00 rows/s., 0.00 B/s.) 0%
3 rows in set. Elapsed: 0.112 sec.
Received exception from server (version 21.3.9):
Code: 549. DB::Exception: Received from localhost:9000. DB::Exception: There was an error on [worker-1:9000]: Code: 549, e.displayText() = DB::Exception: Column `NULL` with type Nullable(Nothing) is not allowed in key expression, it's not comparable (version 21.3.9.83 (official build)).
```
I want to know why it happends.
| https://github.com/ClickHouse/ClickHouse/issues/24035 | https://github.com/ClickHouse/ClickHouse/pull/25907 | 523155a020599263b2a11cc9cdbda96b0fcf288c | 48e325502ed8c703fdff21fc68cce8f02736f4c1 | "2021-05-12T02:12:54Z" | c++ | "2021-07-02T12:36:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,029 | ["src/DataTypes/DataTypeUUID.h", "tests/queries/0_stateless/01869_reinterpret_as_fixed_string_uuid.reference", "tests/queries/0_stateless/01869_reinterpret_as_fixed_string_uuid.sql"] | reinterpretAsFixedString for UUID stopped working | **Describe the bug**
The following statement started to return an error.
```
SELECT reinterpretAsFixedString(toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0'))
```
**Does it reproduce on recent release?**
Current master
**How to reproduce**
```
Description
Received exception from server (version 21.6.1):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Cannot reinterpret UUID as FixedString because it is not fixed size and contiguous in memory: While processing hex(encrypt('aes-128-ecb', reinterpretAsFixedString(toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0')), '1111111111111111')).
```
**Expected behavior**
```
ClickHouse client version 21.6.1.6689 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.6.1 revision 54448.
user-host :) SELECT reinterpretAsFixedString(toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0'))
SELECT reinterpretAsFixedString(toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0'))
Query id: fe58828e-6290-4f97-bd80-9589de123de3
┌─reinterpretAsFixedString(toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0'))─┐
│ ��\��a���j�{� │
└──────────────────────────────────────────────────────────────────────────┘
1 rows in set. Elapsed: 0.005 sec.
user-host :)
```
| https://github.com/ClickHouse/ClickHouse/issues/24029 | https://github.com/ClickHouse/ClickHouse/pull/24177 | 4a84c2f3ea1fc17d8610f25947685ab60b0101c1 | 49d82ee408c73e35b7d2ca02f2e99ef3e59572f1 | "2021-05-11T21:15:01Z" | c++ | "2021-05-18T05:58:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,024 | ["src/Interpreters/ActionsDAG.cpp", "src/Processors/QueryPlan/UnionStep.cpp", "tests/queries/0_stateless/01881_union_header_mismatch_bug.reference", "tests/queries/0_stateless/01881_union_header_mismatch_bug.sql"] | 21.5 Block structure mismatch in Pipe::unitePipes stream: different number of columns |
```
select *
from (
select 'table' as table, toInt64(10) as rows, toInt64(101) as elements
union all
select 'another table' as table, toInt64(0) as rows, toInt64(0) as elements
)
where rows - elements <> 0
DB::Exception: Block structure mismatch in Pipe::unitePipes stream: different number of columns:
table String String(size = 0), rows Int64 Int64(size = 0), elements Int64 Int64(size = 0)
table String String(size = 0), rows Int64 Int64(size = 0), elements Int64 Int64(size = 0), dummy UInt8 UInt8(size = 0) (version 21.5.3.1 (official build))
``` | https://github.com/ClickHouse/ClickHouse/issues/24024 | https://github.com/ClickHouse/ClickHouse/pull/24463 | ea97eee326a94c9a23d1b4349f660a17ad29b551 | d4998909b666e7129ca14221e6546b457086e16f | "2021-05-11T19:23:47Z" | c++ | "2021-05-25T09:04:45Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,020 | ["src/Functions/FunctionsComparison.h", "src/Interpreters/ExpressionJIT.cpp", "tests/performance/column_column_comparison.xml", "tests/queries/0_stateless/00911_tautological_compare.reference", "tests/queries/0_stateless/00911_tautological_compare.sql", "tests/queries/0_stateless/01855_jit_comparison_constant_result.reference", "tests/queries/0_stateless/01855_jit_comparison_constant_result.sql"] | Cannot convert column `less(c0, c1)` because it is non constant in source stream but must be constant in result | ``` sql
CREATE TABLE IF NOT EXISTS t0 (c0 Int32, c1 ALIAS c0) ENGINE = Memory() ;
INSERT INTO t0(c0) VALUES (-341725483);
INSERT INTO t0(c0) VALUES (-781504249), (-1079304144);
INSERT INTO t0(c0) VALUES (-1959608342), (1263536992), (1194580216);
INSERT INTO t0(c0) VALUES (-1639118561), (436683656), (275548633);
INSERT INTO t0(c0) VALUES (-871574473), (-397603882);
INSERT INTO t0(c0) VALUES (-1307576992), (-2034446574);
INSERT INTO t0(c0) VALUES (-1717404319), (-2004827833);
INSERT INTO t0(c0) VALUES (-1604878293), (172290479), (-1808153737);
INSERT INTO t0(c0) VALUES (-1808153737);
INSERT INTO t0(c0) VALUES (1196759186);
INSERT INTO t0(c0) VALUES (-1748576439);
INSERT INTO t0(c0) VALUES (2051389761), (1204228597);
INSERT INTO t0(c0) VALUES (1651756927), (681110175);
INSERT INTO t0(c0) VALUES (-1156920855);
INSERT INTO t0(c0) VALUES (-772388572), (-537624259), (-370697325);
INSERT INTO t0(c0) VALUES (2022909935);
INSERT INTO t0(c0) VALUES (713759928), (1172984613);
INSERT INTO t0(c0) VALUES (1739371629), (-111374952);
INSERT INTO t0(c0) VALUES (139603219), (-2131108571);
INSERT INTO t0(c0) VALUES (-1549239644), (784065114), (420077195);
INSERT INTO t0(c0) VALUES (-1843334206), (-6457985);
INSERT INTO t0(c0) VALUES (-638819909), (-1909459444), (-943510506);
INSERT INTO t0(c0) VALUES (1894781567), (-1985859792), (-476468556);
INSERT INTO t0(c0) VALUES (-618823782), (460423584);
INSERT INTO t0(c0) VALUES (-940596656);
INSERT INTO t0(c0) VALUES (-857101409), (-397603882);
INSERT INTO t0(c0) VALUES (-428805410), (2051389761);
INSERT INTO t0(c0) VALUES (297657448);
INSERT INTO t0(c0) VALUES (-163039758);
SELECT MIN(((t0.c0)<(t0.c1))) FROM t0 SETTINGS aggregate_functions_null_for_empty = 1 FORMAT TabSeparatedWithNamesAndTypes;
```
```
2021.05.11 14:34:57.075596 [ 200 ] {c0304913-59cb-4210-8f3e-7f22acec69c4} <Error> DynamicQueryHandler: Code: 44, e.displayText() = DB::Exception: Cannot convert column `less(c0, c1)` because it is non constant in source stream but must be constant in result, Stack trace (when copying this message, always include the lines below):
0. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/exception:133: std::exception::capture() @ 0x120b19c8 in /usr/bin/clickhouse
1. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/exception:111: std::exception::exception() @ 0x120b1995 in /usr/bin/clickhouse
2. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Exception.cpp:27: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x214f78c3 in /usr/bin/clickhouse
3. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:55: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x12092150 in /usr/bin/clickhouse
4. ./obj-x86_64-linux-gnu/../src/Interpreters/ActionsDAG.cpp:819: DB::ActionsDAG::makeConvertingActions(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, DB::ActionsDAG::MatchColumnsMode, bool, bool, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >*) @ 0x1c0d120a in /usr/bin/clickhouse
5. ./obj-x86_64-linux-gnu/../src/Processors/QueryPlan/ExpressionStep.cpp:66: DB::ExpressionStep::transformPipeline(DB::QueryPipeline&, DB::BuildQueryPipelineSettings const&) @ 0x1db7d464 in /usr/bin/clickhouse
6. ./obj-x86_64-linux-gnu/../src/Processors/QueryPlan/ITransformingStep.cpp:44: DB::ITransformingStep::updatePipeline(std::__1::vector<std::__1::unique_ptr<DB::QueryPipeline, std::__1::default_delete<DB::QueryPipeline> >, std::__1::allocator<std::__1::unique_ptr<DB::QueryPipeline, std::__1::default_delete<DB::QueryPipeline> > > >, DB::BuildQueryPipelineSettings const&) @ 0x1db8e23f in /usr/bin/clickhouse
7. ./obj-x86_64-linux-gnu/../src/Processors/QueryPlan/QueryPlan.cpp:168: DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x1dbb50c6 in /usr/bin/clickhouse
8. ./obj-x86_64-linux-gnu/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:313: DB::InterpreterSelectWithUnionQuery::execute() @ 0x1c9a36e5 in /usr/bin/clickhouse
9. ./obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:561: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool, DB::ReadBuffer*) @ 0x1cc00559 in /usr/bin/clickhouse
10. ./obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:997: DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, std::__1::shared_ptr<DB::Context>, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>) @ 0x1cc02a2b in /usr/bin/clickhouse
11. ./obj-x86_64-linux-gnu/../src/Server/HTTPHandler.cpp:772: DB::HTTPHandler::processQuery(std::__1::shared_ptr<DB::Context>, DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0x1d655089 in /usr/bin/clickhouse
12. ./obj-x86_64-linux-gnu/../src/Server/HTTPHandler.cpp:911: DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x1d656d1f in /usr/bin/clickhouse
13. ./obj-x86_64-linux-gnu/../src/Server/HTTP/HTTPServerConnection.cpp:48: DB::HTTPServerConnection::run() @ 0x1d6fffc2 in /usr/bin/clickhouse
14. ./obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x21428f5c in /usr/bin/clickhouse
15. ./obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x214297e4 in /usr/bin/clickhouse
16. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:199: Poco::PooledThread::run() @ 0x21584ac3 in /usr/bin/clickhouse
17. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x2158137d in /usr/bin/clickhouse
18. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345: Poco::ThreadImpl::runnableEntry(void*) @ 0x21580108 in /usr/bin/clickhouse
19. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
20. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 21.6.1.6804)
```
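A hedged reduction of the repro above: since `c1` is declared `ALIAS c0`, the comparison `c0 < c1` is tautologically false, and the fix touching `src/Interpreters/ExpressionJIT.cpp` suggests the JIT path constant-folds it while another code path does not. A minimal sketch that forces the JIT path (the settings are real; the outcome on affected builds is an assumption):
```sql
SELECT MIN(c0 < c1)
FROM t0
SETTINGS compile_expressions = 1, min_count_to_compile_expression = 0;
```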
Found by SQLancer https://clickhouse-test-reports.s3.yandex.net/20393/6aa7c0fafcca9c1493e6923ef68592ab68b286da/sqlancer_test.html#fail1 | https://github.com/ClickHouse/ClickHouse/issues/24020 | https://github.com/ClickHouse/ClickHouse/pull/24023 | b0476c1fa219ca98f660d44f254eb4ed9e4fc12e | af78649e561350814a58a925345fe48479e41f52 | "2021-05-11T16:24:41Z" | c++ | "2021-05-20T07:58:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 24,011 | ["src/Interpreters/JoinToSubqueryTransformVisitor.cpp", "src/Parsers/ASTIdentifier.cpp", "tests/queries/0_stateless/01890_cross_join_explain_crash.reference", "tests/queries/0_stateless/01890_cross_join_explain_crash.sql"] | EXPLAIN SYNTAX Causing Segfault v21.3 | ClickHouse v21.3.9.1
```
# clickhouse-client
ClickHouse client version 21.3.9.1.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.3.9 revision 54447.
hostname :)
EXPLAIN SYNTAX
SELECT *
FROM
(
SELECT 1
)
,
(
SELECT 1
)
,
(
SELECT 1
)
Query id: 509c46c9-5f96-4bb7-bdcd-4a3e9169abce
[hostname] 2021.05.11 15:19:22.906838 [ 28452 ] <Fatal> BaseDaemon: ########################################
[hostname] 2021.05.11 15:19:22.907377 [ 28452 ] <Fatal> BaseDaemon: (version 21.3.9.1, build id: 1DD0C685978F2988C19E01ABDC698C78A31B3E5C) (from thread 26179) (query_id: 509c46c9-5f96-4bb7-bdcd-4a3e9169abce) Received signal Segmentation fault (
11)
[hostname] 2021.05.11 15:19:22.907669 [ 28452 ] <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
[hostname] 2021.05.11 15:19:22.907855 [ 28452 ] <Fatal> BaseDaemon: Stack trace: 0x116ac6a1 0x116da613 0x116a349c 0x116bf70e 0x116fffdf 0xeb587c0 0xeb3a4af 0xeb394f1 0xeb06663 0xeb06980 0xeb06980 0xeb06980 0xeb03e27 0xeb036a2 0xf00801b 0xf00671
3 0xf7a5c6d 0xf7b80f9 0x11e9533f 0x11e96d51 0x11fcf2a9 0x11fcb0fa 0x7f462cc0f494 0x7f462c951aff
[hostname] 2021.05.11 15:19:22.908295 [ 28452 ] <Fatal> BaseDaemon: 1. DB::ASTIdentifier::formatImplWithoutAlias(DB::IAST::FormatSettings const&, DB::IAST::FormatState&, DB::IAST::FormatStateStacked) const @ 0x116ac6a1 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.908483 [ 28452 ] <Fatal> BaseDaemon: 2. DB::ASTWithAlias::formatImpl(DB::IAST::FormatSettings const&, DB::IAST::FormatState&, DB::IAST::FormatStateStacked) const @ 0x116da613 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.908694 [ 28452 ] <Fatal> BaseDaemon: 3. DB::ASTExpressionList::formatImpl(DB::IAST::FormatSettings const&, DB::IAST::FormatState&, DB::IAST::FormatStateStacked) const @ 0x116a349c in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.908985 [ 28452 ] <Fatal> BaseDaemon: 4. DB::ASTSelectQuery::formatImpl(DB::IAST::FormatSettings const&, DB::IAST::FormatState&, DB::IAST::FormatStateStacked) const @ 0x116bf70e in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.909307 [ 28452 ] <Fatal> BaseDaemon: 5. DB::IAST::formatForErrorMessage() const @ 0x116fffdf in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.910119 [ 28452 ] <Fatal> BaseDaemon: 6. DB::InDepthNodeVisitor<DB::JoinToSubqueryTransformMatcher, true, std::__1::shared_ptr<DB::IAST> >::visit(std::__1::shared_ptr<DB::IAST>&) @ 0xeb587c0 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.911184 [ 28452 ] <Fatal> BaseDaemon: 7. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::o
ptional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_tr
aits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&) @ 0xeb3a4af in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.912077 [ 28452 ] <Fatal> BaseDaemon: 8. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, DB::Context const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic
_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0xeb394f1 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.912243 [ 28452 ] <Fatal> BaseDaemon: 9. ? @ 0xeb06663 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.912493 [ 28452 ] <Fatal> BaseDaemon: 10. ? @ 0xeb06980 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.912700 [ 28452 ] <Fatal> BaseDaemon: 11. ? @ 0xeb06980 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.912870 [ 28452 ] <Fatal> BaseDaemon: 12. ? @ 0xeb06980 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.913017 [ 28452 ] <Fatal> BaseDaemon: 13. DB::InterpreterExplainQuery::executeImpl() @ 0xeb03e27 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.913295 [ 28452 ] <Fatal> BaseDaemon: 14. DB::InterpreterExplainQuery::execute() @ 0xeb036a2 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.913478 [ 28452 ] <Fatal> BaseDaemon: 15. ? @ 0xf00801b in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.913753 [ 28452 ] <Fatal> BaseDaemon: 16. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0
xf006713 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.914271 [ 28452 ] <Fatal> BaseDaemon: 17. DB::TCPHandler::runImpl() @ 0xf7a5c6d in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.914513 [ 28452 ] <Fatal> BaseDaemon: 18. DB::TCPHandler::run() @ 0xf7b80f9 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.914714 [ 28452 ] <Fatal> BaseDaemon: 19. Poco::Net::TCPServerConnection::start() @ 0x11e9533f in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.914919 [ 28452 ] <Fatal> BaseDaemon: 20. Poco::Net::TCPServerDispatcher::run() @ 0x11e96d51 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.915188 [ 28452 ] <Fatal> BaseDaemon: 21. Poco::PooledThread::run() @ 0x11fcf2a9 in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.915362 [ 28452 ] <Fatal> BaseDaemon: 22. Poco::ThreadImpl::runnableEntry(void*) @ 0x11fcb0fa in /usr/bin/clickhouse
[hostname] 2021.05.11 15:19:22.915607 [ 28452 ] <Fatal> BaseDaemon: 23. start_thread @ 0x7494 in /lib/x86_64-linux-gnu/libpthread-2.24.so
[hostname] 2021.05.11 15:19:22.916040 [ 28452 ] <Fatal> BaseDaemon: 24. clone @ 0xe8aff in /lib/x86_64-linux-gnu/libc-2.24.so
[hostname] 2021.05.11 15:19:31.771616 [ 28452 ] <Fatal> BaseDaemon: Calculated checksum of the binary: 5F45ECE08DC475DC578C3275037D7A34. There is no information about the reference checksum.
Exception on client:
Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000
Connecting to localhost:9000 as user default.
Code: 210. DB::NetException: Connection refused (localhost:9000)
``` | https://github.com/ClickHouse/ClickHouse/issues/24011 | https://github.com/ClickHouse/ClickHouse/pull/25082 | 82b8d45cd71128378b334e0aa4abc756898e688d | a163453e74bd94a1a2619717ba270990f1c61af7 | "2021-05-11T08:24:26Z" | c++ | "2021-06-09T16:40:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,960 | ["src/Functions/array/arrayDifference.cpp", "tests/queries/0_stateless/01851_array_difference_decimal_overflow_ubsan.reference", "tests/queries/0_stateless/01851_array_difference_decimal_overflow_ubsan.sql"] | UBSan: Function arrayDifference decimal overflow | https://clickhouse-test-reports.s3.yandex.net/8482/5b28f7cf439f00d8f8601ba121dd929fcfe63589/fuzzer_ubsan/server.log
```
SELECT arrayDifference([toDecimal32(100.0000991821289, 0), -2147483647]) AS x
```
Result:
```
../src/Core/Types.h:182:113: runtime error: signed integer overflow: -2147483647 - 100 cannot be represented in type 'int'
#0 0x18301e6e in bool DB::ArrayDifferenceImpl::executeType<DB::Decimal<int>, DB::Decimal<int> >(COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&, DB::ColumnArray const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn>&) (/workspace/clickhouse+0x18301e6e)
#1 0x182fd7f4 in DB::ArrayDifferenceImpl::execute(DB::ColumnArray const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn>) (/workspace/clickhouse+0x182fd7f4)
#2 0x182fb5a2 in DB::FunctionArrayMapped<DB::ArrayDifferenceImpl, DB::NameArrayDifference>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x182fb5a2)
#3 0x12d0a8f4 in DB::IFunction::executeImplDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x12d0a8f4)
#4 0x12d09f56 in DB::DefaultExecutable::executeDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x12d09f56)
#5 0x137af05a in DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const (/workspace/clickhouse+0x137af05a)
#6 0x137b006d in DB::ExecutableFunctionAdaptor::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const (/workspace/clickhouse+0x137b006d)
#7 0x1aa44925 in DB::ActionsDAG::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) obj-x86_64-linux-gnu/../src/Interpreters/ActionsDAG.cpp:210:35
``` | https://github.com/ClickHouse/ClickHouse/issues/23960 | https://github.com/ClickHouse/ClickHouse/pull/23961 | 90f3760c576d1356bed58ad0a5f01a32146d487e | e517436ba489501e95a370912e44b79be02bf7ac | "2021-05-08T14:33:02Z" | c++ | "2021-05-09T11:04:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,938 | ["src/Interpreters/AsynchronousMetricLog.cpp"] | AsynchronousMetricLog: use LowCardinality for names. | https://github.com/ClickHouse/ClickHouse/issues/23938 | https://github.com/ClickHouse/ClickHouse/pull/23981 | 50f8800822177a20253f057dad88e87367720d71 | f2a2f85f63c755a538cca22b792d61a70c6b24e6 | "2021-05-07T16:07:30Z" | c++ | "2021-05-10T07:58:58Z" |
|
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,926 | ["src/Storages/StorageMaterializedView.cpp", "tests/queries/0_stateless/01155_rename_move_materialized_view.reference", "tests/queries/0_stateless/01155_rename_move_materialized_view.sql", "tests/queries/skip_list.json"] | Data table for MatView behaves strangely when MV moved from Ordinary to Atomic DB | **Describe the unexpected behaviour**
When MatView is moved from Ordinary DB to Atomic, data table stays in the same DB but renamed. I'd expected it moved to another DB too. Or at least not renamed and moved separately.
**How to reproduce**
* Which ClickHouse server version to use: 21.3.9
* Which interface to use, if matters: doesn't matter
* Full reproduction SQL:
```sql
CREATE DATABASE IF NOT EXISTS ordinary ENGINE=Ordinary;
CREATE DATABASE IF NOT EXISTS atomic ENGINE=Atomic;
CREATE TABLE IF NOT EXISTS ordinary.data (s String) ENGINE=MergeTree() PARTITION BY tuple() ORDER BY s;
CREATE MATERIALIZED VIEW IF NOT EXISTS ordinary.mv (s String) ENGINE=MergeTree() PARTITION BY tuple() ORDER BY s AS SELECT * FROM ordinary.data;
SELECT 'ordinary:';
SHOW TABLES FROM ordinary;
RENAME TABLE ordinary.mv TO atomic.mv;
SELECT 'ordinary after rename:';
SHOW TABLES FROM ordinary;
SELECT 'atomic after rename:';
SHOW TABLES FROM atomic;
```
Output:
```
ordinary:
.inner.mv
data
mv
ordinary after rename:
.inner_id.d9584e69-6200-440f-a82d-3f8e5e18ff5e
data
atomic after rename:
mv
```
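A quick diagnostic sketch (assuming the standard `system.tables` columns) to confirm where the inner table ended up and which UUID it carries after the RENAME:
```sql
SELECT database, name, uuid, engine
FROM system.tables
WHERE name LIKE '.inner%' OR name = 'mv';
```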
**Expected behavior**
Either `.inner.mv` stays as-is in the Ordinary DB, or it is moved together with `mv` to the Atomic DB.
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,905 | ["src/Processors/Formats/Impl/BinaryRowInputFormat.cpp", "src/Server/HTTP/HTMLForm.cpp", "src/Server/HTTP/HTMLForm.h", "tests/queries/0_stateless/00304_http_external_data.reference", "tests/queries/0_stateless/00304_http_external_data.sh"] | "Cannot read all data" exception when sending External Data in RowBinary format | **Describe the bug**
Supplying External Data in RowBinary format is broken.
**Does it reproduce on recent release?**
should be reproducible with 21.4.6.55
**How to reproduce**
run docker container using fresh ClickHouse image:
`docker run --rm -t -i --name=ch --net=host --ulimit nofile=262144:262144 yandex/clickhouse-server`
Executing a straightforward query that fetches a couple of integers from the supplied data (two 32-bit integers, 1 and 2, transferred in RowBinary format as 8 bytes) fails, complaining about a third row even though we've sent only two:
```
echo "0x0: 0100000002000000" | xxd -r - | curl -F "tmp=@-" "http://localhost:8123/?query=select+TaskID+from+tmp+format+JSON&tmp_structure=TaskID+UInt32&tmp_format=RowBinary"
Code: 33, e.displayText() = DB::Exception: Cannot read all data. Bytes read: 2. Bytes expected: 4.: (at row 3)
: While executing SourceFromInputStream (version 21.4.6.55 (official build))
```
While it should have executed fine, returning the two rows "TaskID": 1 and "TaskID": 2.
Adding two stray bytes at the end of the RowBinary buffer causes the error to disappear:
```
echo "0x0: 01000000020000000304" | xxd -r - | curl -F "tmp=@-" "http://localhost:8123/?query=select+TaskID+from+tmp+format+JSON&tmp_structure=TaskID+UInt32&tmp_format=RowBinary"
{
"meta":
[
{
"name": "TaskID",
"type": "UInt32"
}
],
"data":
[
{
"TaskID": 1
},
{
"TaskID": 2
},
{
"TaskID": 168625155
}
],
"rows": 3,
"statistics":
{
"elapsed": 0.000203513,
"rows_read": 3,
"bytes_read": 12
}
}
```
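As a sanity check (a sketch; run via clickhouse-local or clickhouse-client and pipe through xxd), the original 8-byte payload is exactly two little-endian UInt32 rows with no room for extra bytes:
```sql
SELECT toUInt32(number + 1) FROM numbers(2) FORMAT RowBinary;
-- expected bytes: 01 00 00 00 02 00 00 00
```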
However, as the JSON output above shows, the query now returns a gibberish third value, 168625155, which decodes to the little-endian byte sequence 03 04 0D 0A (the two stray bytes plus CR/LF), suggesting that the trailing `\r\n` after the multipart/form-data value is erroneously treated as part of the value, causing parsing of a correct RowBinary block to fail. | https://github.com/ClickHouse/ClickHouse/issues/23905 | https://github.com/ClickHouse/ClickHouse/pull/24399 | 678a16b5dc512dfa70e4715be3442d117d336f0d | ce2a809773e9e7b38d4905dad4b1bb2223ec649e | "2021-05-05T21:08:53Z" | c++ | "2021-05-28T11:59:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,901 | ["src/Common/StringUtils/StringUtils.h", "src/Server/HTTP/ReadHeaders.cpp", "tests/queries/0_stateless/01399_http_request_headers.reference", "tests/queries/0_stateless/01399_http_request_headers.sh"] | HTTP request with an empty header value results in bad request response | Clicktail installed from provided .deb package (version 1.0.20180402) generates HTTP requests like:
`POST /?query=INSERT+INTO+clicktail.apache_log+FORMAT+JSONEachRow HTTP/1.1`
`Host: XXX:8123`
`User-Agent: libclick-go/1.4.0 clicktail/dev (regex)`
`Content-Length: 2425`
`Authorization: Basic XXX`
`Content-Encoding: gzip`
`Content-Type: application/json`
`X-Honeycomb-Team: `
`Accept-Encoding: gzip`
The X-Honeycomb-Team header contains only a space before the terminating characters:
`58 2d 48 6f 6e 65 79 63 6f 6d 62 2d 54 65 61 6d 3a 20 0d 0a`
ClickHouse server 21.4.5.46 doesn't accept such requests and returns:
`HTTP/1.1 400 Bad Request`
There is no requirement in RFC 7230 https://tools.ietf.org/html/rfc7230#section-3.2 for a field-value to contain any data. Shouldn't those requests be accepted? | https://github.com/ClickHouse/ClickHouse/issues/23901 | https://github.com/ClickHouse/ClickHouse/pull/24285 | b1d13198945e5ce0d45af3c168e6cf75470d2254 | 5de03390e0e5319f47be4bd6f9556a411ada84ba | "2021-05-05T16:06:44Z" | c++ | "2021-05-21T02:21:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,860 | ["base/daemon/BaseDaemon.cpp", "base/loggers/Loggers.cpp", "contrib/poco", "contrib/poco-cmake/Foundation/CMakeLists.txt", "docker/test/integration/base/Dockerfile", "tests/integration/test_backward_compatibility/test_aggregate_fixed_key.py", "tests/integration/test_log_lz4_streaming/__init__.py", "tests/integration/test_log_lz4_streaming/configs/logs.xml", "tests/integration/test_log_lz4_streaming/test.py"] | RFC: LZ4 compressed clickhouse logs | I really love clickhouse logs. They helped me a lot of times, they 'tell stories' :)
But in limited environments (like tiny cloud instances) people often lower the main log level from trace to information, or disable it altogether and keep only the error log. The motivation is simple: logs are quite big, they take disk space and a share of disk bandwidth, and that effect is very visible on tiny instances.
Maybe if we compressed them with LZ4 (streaming), there would be fewer reasons to disable them?
What do you think of that? (I'm not totally convinced it's a good idea, because logs are usually expected to be straightforward and simple.) | https://github.com/ClickHouse/ClickHouse/issues/23860 | https://github.com/ClickHouse/ClickHouse/pull/29219 | f27fcf837206b942a34616631b733c7d64051971 | fc9ef14a736d62f121d27d3a23490ef7198190be | "2021-05-03T10:25:06Z" | c++ | "2021-11-17T18:22:57Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,821 | ["src/Common/IntervalTree.h", "src/Common/examples/CMakeLists.txt", "src/Common/examples/interval_tree.cpp", "src/Common/tests/gtest_interval_tree.cpp", "src/Dictionaries/RangeHashedDictionary.cpp", "src/Dictionaries/RangeHashedDictionary.h", "tests/performance/range_hashed_dictionary.xml", "tests/queries/0_stateless/01676_range_hashed_dictionary.sql", "tests/queries/0_stateless/02008_complex_key_range_hashed_dictionary.sql"] | range_hashed weird issue with performance. | ```sql
-- version 21.5.1
drop dictionary if exists curs_dict;
drop table if exists curs;
CREATE TABLE curs
(
`a` UInt64,
`b` UInt64,
`e` UInt64,
`rate` UInt16
)
ENGINE = MergeTree
ORDER BY a;
insert into curs
select 1,
toYYYYMMDDhhmmss(toDateTime('2020-01-01 00:00:00')+number*2) ,
toYYYYMMDDhhmmss(toDateTime('2020-01-01 00:00:00')+number*2+1), 1
from numbers(1000000);
CREATE DICTIONARY curs_dict
(
    `a` UInt64,
`b` UInt64,
`e` UInt64,
`rate` UInt16
)
PRIMARY KEY a
SOURCE(CLICKHOUSE(TABLE curs DB 'default' USER 'default'))
LIFETIME(MIN 0 MAX 10)
LAYOUT(RANGE_HASHED)
RANGE(MIN b MAX e);
-- 0 rows in set. Elapsed: 115.715 sec.
SELECT sum(dictGetUInt16('curs_dict', 'rate', toUInt64(1), toYYYYMMDDhhmmss((toDateTime('2020-01-01 00:00:00') + (number * 2)) + 1))) AS s
FROM numbers(10000);
-- Elapsed: 0.023 sec.
SELECT sum(dictGetUInt16('curs_dict', 'rate', toUInt64(1), toYYYYMMDDhhmmss((toDateTime('2020-01-01 00:00:00') + (number * 2)) + 1))) AS s
FROM numbers(100000)
-- Elapsed: 2.264 sec. (expected ~0.023 * 10 = ~0.230)
SELECT sum(dictGetUInt16('curs_dict', 'rate', toUInt64(1), toYYYYMMDDhhmmss((toDateTime('2020-01-01 00:00:00') + (rand())) + 1))) AS s
FROM numbers(100000)
-- Hangs forever
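-- Hedged note: the super-linear slowdown and the hang on random dates are
-- consistent with the pre-fix range_hashed lookup scanning every stored
-- range for a key (here, 1M ranges for key 1) on each dictGet call; the
-- linked fix replaces this with an interval tree (see src/Common/IntervalTree.h
-- among the changed files).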
``` | https://github.com/ClickHouse/ClickHouse/issues/23821 | https://github.com/ClickHouse/ClickHouse/pull/33516 | 6e3287038e62ef2fc405a47ad8c7575303ca283c | 41a6cd54aac9fdb9150d502fb2e7c0345974850a | "2021-04-30T19:44:49Z" | c++ | "2022-01-19T15:30:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,804 | ["tests/queries/0_stateless/02691_multiple_joins_backtick_identifiers.reference", "tests/queries/0_stateless/02691_multiple_joins_backtick_identifiers.sql"] | Missing column in multiple join with backticked identifiers | **Describe the bug**
The following example fails:
```sql
CREATE TABLE t1 (`1a` Nullable(Int64), `2b` Nullable(String)) engine = Memory;
CREATE TABLE t2 (`3c` Nullable(Int64), `4d` Nullable(String)) engine = Memory;
CREATE TABLE t3 (`5e` Nullable(Int64), `6f` Nullable(String)) engine = Memory;
SELECT
`1a`,
`2b`
FROM default.t1 AS tt1
INNER JOIN
(
SELECT `3c`
FROM default.t2
) AS tt2 ON tt1.`1a` = tt2.`3c`
INNER JOIN
(
SELECT `6f`
FROM default.t3
) AS tt3 ON tt1.`2b` = tt3.`6f`;
Received exception from server (version 21.6.1):
Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Missing columns: '3c' while processing query: 'SELECT `2b`, `1a`, `3c` FROM default.t1 AS tt1 ALL INNER JOIN (SELECT `3c` FROM default.t2) AS tt2 ON `1a` = `3c`', required columns: '2b' '1a' '3c', maybe you meant: ['2b','1a'], joined columns: 'tt2.3c'.
```
Since the identifiers start with a digit, they are wrapped in backticks when formatted, and are then not found during the multiple-join rewriting. (Such identifiers are auto-generated in our schema as hex hashes.) An example with regular ASCII identifiers succeeds:
```sql
CREATE TABLE t1 (`a` Nullable(Int64), `b` Nullable(String)) engine = Memory;
CREATE TABLE t2 (`c` Nullable(Int64), `d` Nullable(String)) engine = Memory;
CREATE TABLE t3 (`e` Nullable(Int64), `f` Nullable(String)) engine = Memory;
SELECT
a,
b
FROM default.t1 AS tt1
INNER JOIN
(
SELECT c
FROM default.t2
) AS tt2 ON tt1.a = tt2.c
INNER JOIN
(
SELECT f
FROM default.t3
) AS tt3 ON tt1.b = tt3.f
Ok.
```
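An untested workaround sketch: alias the digit-leading columns to plain ASCII names inside the subqueries, so the multiple-join rewriter never has to re-quote them:
```sql
SELECT `1a`, `2b`
FROM default.t1 AS tt1
INNER JOIN (SELECT `3c` AS c3 FROM default.t2) AS tt2 ON tt1.`1a` = tt2.c3
INNER JOIN (SELECT `6f` AS f6 FROM default.t3) AS tt3 ON tt1.`2b` = tt3.f6;
```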
**Does it reproduce on recent release?**
Reproduces on master.
| https://github.com/ClickHouse/ClickHouse/issues/23804 | https://github.com/ClickHouse/ClickHouse/pull/47737 | 50e1eedd4766ed8c2e34339d78f000d63f4d5191 | 72c6084267da929019068fae3d78741b8225efcf | "2021-04-30T14:19:54Z" | c++ | "2023-03-23T14:49:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,800 | ["src/Dictionaries/DictionaryHelpers.h", "src/Dictionaries/HashedDictionary.cpp", "tests/integration/test_dictionaries_update_field/__init__.py", "tests/integration/test_dictionaries_update_field/configs/config.xml", "tests/integration/test_dictionaries_update_field/configs/users.xml", "tests/integration/test_dictionaries_update_field/test.py"] | 21.4.4 complex_key_hashed dictionaries with update_field cannot load | I upgraded from 21.4.3 to 21.4.4 and my `complex_key_hashed` dictionaries with `update_field` cannot load anymore
```
2021.04.30 13:13:01.913021 [ 7023 ] {} <Error> ExternalDictionariesLoader: Could not load external dictionary 'complex_key_hashed_dict', next update is scheduled at 2021-04-30 13:14:06: std::exception. Code: 1001, type: std::length_error, e.what() = basic_string, Stack trace (when copying this message, always include the lines below):
0. ? @ 0x891bc99 in /usr/bin/clickhouse
1. ? @ 0x891bc60 in /usr/bin/clickhouse
2. ? @ 0x1435753b in ?
3. ? @ 0x1435871c in ?
4. DB::Field::operator=(std::__1::basic_string_view<char, std::__1::char_traits<char> > const&) @ 0xf997726 in /usr/bin/clickhouse
5. DB::ColumnString::get(unsigned long, DB::Field&) const @ 0xfa7d1d2 in /usr/bin/clickhouse
6. auto DB::HashedDictionary<(DB::DictionaryKeyType)1, false>::blockToAttributes(DB::Block const&)::'lambda'(auto&)::operator()<HashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, StringRef, DefaultHash<StringRef, void>, HashTableNoState>, DefaultHash<StringRef, void>, HashTableGrower<8ul>, Allocator<true, true> > >(auto&) const @ 0xda21c0d in /usr/bin/clickhouse
7. DB::HashedDictionary<(DB::DictionaryKeyType)1, false>::blockToAttributes(DB::Block const&) @ 0xd91f299 in /usr/bin/clickhouse
8. DB::HashedDictionary<(DB::DictionaryKeyType)1, false>::updateData() @ 0xd9211a4 in /usr/bin/clickhouse
9. DB::HashedDictionary<(DB::DictionaryKeyType)1, false>::loadData() @ 0xd91dd5a in /usr/bin/clickhouse
10. DB::HashedDictionary<(DB::DictionaryKeyType)1, false>::HashedDictionary(DB::StorageID const&, DB::DictionaryStructure const&, std::__1::unique_ptr<DB::IDictionarySource, std::__1::default_delete<DB::IDictionarySource> >, DB::ExternalLoadableLifetime, bool, std::__1::shared_ptr<DB::Block>) @ 0xd91dad9 in /usr/bin/clickhouse
11. ? @ 0xda26b8d in /usr/bin/clickhouse
12. ? @ 0xda26ecc in /usr/bin/clickhouse
13. DB::DictionaryFactory::create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context const&, bool) const @ 0xee13c28 in /usr/bin/clickhouse
14. DB::ExternalDictionariesLoader::create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf40bea1 in /usr/bin/clickhouse
15. DB::ExternalLoader::LoadingDispatcher::loadSingleObject(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::ExternalLoader::ObjectConfig const&, std::__1::shared_ptr<DB::IExternalLoadable const>) @ 0xf417de7 in /usr/bin/clickhouse
16. DB::ExternalLoader::LoadingDispatcher::doLoading(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, bool, unsigned long, bool) @ 0xf415249 in /usr/bin/clickhouse
17. ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::ExternalLoader::LoadingDispatcher::*)(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, bool, unsigned long, bool), DB::ExternalLoader::LoadingDispatcher*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&, unsigned long&, bool&, unsigned long&, bool>(void (DB::ExternalLoader::LoadingDispatcher::*&&)(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, bool, unsigned long, bool), DB::ExternalLoader::LoadingDispatcher*&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&, unsigned long&, bool&, unsigned long&, bool&&)::'lambda'()::operator()() @ 0xf41a421 in /usr/bin/clickhouse
18. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8954fef in /usr/bin/clickhouse
19. ? @ 0x8958a83 in /usr/bin/clickhouse
20. start_thread @ 0x74a4 in /lib/x86_64-linux-gnu/libpthread-2.24.so
21. __clone @ 0xe8d0f in /lib/x86_64-linux-gnu/libc-2.24.so
(version 21.4.4.30 (official build))
``` | https://github.com/ClickHouse/ClickHouse/issues/23800 | https://github.com/ClickHouse/ClickHouse/pull/23824 | 264b30d73869fc010590965fcd3f1f33bdfa48c6 | a979a86930f5125320ff6174481e557f6d54d562 | "2021-04-30T13:17:46Z" | c++ | "2021-05-06T14:12:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,792 | ["docs/en/sql-reference/functions/encoding-functions.md", "src/Functions/FunctionsCoding.cpp", "src/Functions/FunctionsCoding.h", "tests/queries/0_stateless/01866_bit_positions_to_array.reference", "tests/queries/0_stateless/01866_bit_positions_to_array.sql"] | Function to get position of non zero bits. | **Use case**
Convert a bitmask to its bit positions.
It would allow us to directly use a FLAT dictionary or an Enum datatype without jumping through the slow `bitmaskToArray` plus `arrayMap` combination.
**Describe the solution you'd like**
```
SELECT
arrayMap(x -> log2(x), bitmaskToArray(100)) AS bitmask,
bitPositions(100) AS bitpositions
┌─bitmask─┬─bitpositions─┐
│ [2,5,6] │ [2,5,6] │
└─────────┴──────────────┘
```
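Judging by the test files in the linked fix (`01866_bit_positions_to_array`), the function appears to have landed as `bitPositionsToArray`; a sketch assuming that name:
```sql
SELECT bitPositionsToArray(100);  -- expected [2,5,6]
```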
| https://github.com/ClickHouse/ClickHouse/issues/23792 | https://github.com/ClickHouse/ClickHouse/pull/25394 | 2c4c2680f706bf6a652ae9d5df0be4f45091fb34 | b34b66c55d5b05160da3447bf9dc53237ae2ad21 | "2021-04-30T09:15:45Z" | c++ | "2021-06-17T22:31:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,775 | ["src/IO/S3/PocoHTTPClient.cpp", "tests/integration/test_merge_tree_s3_failover/s3_endpoint/endpoint.py", "tests/integration/test_storage_s3/s3_mocks/mock_s3.py", "tests/integration/test_storage_s3/test.py"] | S3 table endpoint region configuration constrained to the amazonaws domain | Hi, first of all thank you for a great piece of software.
**Describe the bug**
PR #15646 fixes #10417 by matching the S3 endpoint region using the following regular expression: `s3.<region>.amazonaws`. Attempting to use an S3 compatible storage engine hosted outside of AWS an authentication error due to mismatch with the default region (`us-east-1`).
**Does it reproduce on recent release?**
Reproduces on `ClickHouse server version 21.4.5.46 (official build)`
**How to reproduce?**
The following query fails:
`SELECT * from s3('https://s3.gra.cloud.ovh.net/..., 'key', 'secret', 'csv', '...') LIMIT 10`
**Expected behavior**
One potential solution would be to allow the region to be specified in the S3 disk settings (adding vendor-specific regexp patterns in `PocoHTTPClientConfiguration::updateSchemeAndRegion` seems unrealistic). | https://github.com/ClickHouse/ClickHouse/issues/23775 | https://github.com/ClickHouse/ClickHouse/pull/23844 | 780b7cc8e1c11ff729dad974c1cc7b47f3dbd89d | d340e33e2b373bb42e28ec10ddca38dd7e33b869 | "2021-04-29T18:51:36Z" | c++ | "2021-05-13T16:21:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,764 | ["src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/StorageBuffer.cpp", "src/Storages/StorageDistributed.cpp", "src/Storages/StorageMerge.cpp", "src/Storages/StorageNull.cpp", "tests/queries/0_stateless/01851_clear_column_referenced_by_mv.reference", "tests/queries/0_stateless/01851_clear_column_referenced_by_mv.sql"] | CLEAR COLUMN doesn't work for columns referenced by materialized view | Looks like https://github.com/ClickHouse/ClickHouse/pull/21303 also broke "clear column" for such columns.
How to reproduce:
```
CREATE TABLE test.TTAdmLog
(
`ts` DateTime,
`event_date` Date DEFAULT toDate(ts),
`impression_id` String DEFAULT '',
`creative_id` String DEFAULT '',
`creative_adm` String DEFAULT '',
`impression_id_compressed` FixedString(16) DEFAULT UUIDStringToNum(impression_id),
    `impression_id_hashed` UInt16 DEFAULT reinterpretAsUInt16(impression_id_compressed)
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/TTAdmLogTest', '{replica}')
PARTITION BY toMonday(event_date)
ORDER BY (event_date, impression_id_hashed)
SAMPLE BY impression_id_hashed
SETTINGS replicated_deduplication_window = 1000, replicated_deduplication_window_seconds = 864000, index_granularity = 8192
CREATE MATERIALIZED VIEW test.TTView
(
`event_date` Date,
`creative_id` String,
    `creative_adm` AggregateFunction(anyLast, String)
)
ENGINE = ReplicatedAggregatingMergeTree('/clickhouse/tables/{shard}/TTViewTest', '{replica}', event_date, (event_date, creative_id), 8192) AS
SELECT
toStartOfMonth(event_date) AS event_date,
arrayJoin(splitByChar(',', creative_id)) AS creative_id,
    anyLastState(creative_adm) AS creative_adm
FROM test.TTAdmLog
GROUP BY
event_date,
    creative_id
```
ClickHouse server version 20.9.3 revision 54439.
```
alter table test.TTAdmLog clear column creative_adm in partition '2021-03-22'
OK
```
ClickHouse server version 21.4.3 revision 54447.
```
alter table test.TTAdmLog clear column creative_adm in partition '2021-03-22'
DB::Exception: Trying to ALTER DROP column creative_adm which is referenced by materialized view ['TTView'].
```
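An untested workaround sketch: since CLEAR COLUMN only resets values and keeps the column, detaching the dependent view for the duration of the ALTER may be enough:
```sql
DETACH TABLE test.TTView;
ALTER TABLE test.TTAdmLog CLEAR COLUMN creative_adm IN PARTITION '2021-03-22';
ATTACH TABLE test.TTView;
```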
| https://github.com/ClickHouse/ClickHouse/issues/23764 | https://github.com/ClickHouse/ClickHouse/pull/23781 | 4bb56849b37c625d4810e8ccbc5b7e1b55858088 | 14e879cef98915f20224bdfeae0e519b2d5620ff | "2021-04-29T13:41:07Z" | c++ | "2021-04-30T14:28:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,682 | ["src/Functions/initialQueryID.cpp", "src/Functions/queryID.cpp", "src/Functions/registerFunctionsMiscellaneous.cpp", "tests/queries/0_stateless/01943_query_id_check.reference", "tests/queries/0_stateless/01943_query_id_check.sql", "tests/queries/skip_list.json"] | Create functions queryID and initialQueryID | Goal is to find out (initial_)query_id of INSERT query that triggered this (current) Materialized View execution, in order to register this initial_query_id in statistics table.
Why? It would enrich statistics/monitoring/troubleshooting and provide significant information both about the origins of an insert into ClickHouse and about the propagation of the insert through ClickHouse derived objects (e.g. in-ClickHouse latency).
The point is that system.query_log with all its useful information knows nothing about data.
At the same time manual statistics table(s) that are filled by adjacent MV potentially know everything about incoming data, but lack information about initial IP's, queries etc.
These two sources of statistics are crying out to be linked together.
This use case and this desire look general enough, so there is probably some general solution.
+++++++++++++++++++++++++++
Details
+++++++++++++++++++++++++++
Input data stream enters my ClickHouse cluster through input Null table (on every node).
Several Materialized Views over this Null table are performing some processing and propagating data further (there are distributed tables and more MV's there).
In order to reduce in-ClickHouse latency: parallel_view_processing=1
One of derived tables (stat2.inserts_v02) contains statistics about input flow (count(), quantiles, delays) over some data-related dimensions.
It is basic ReplicatedMergeTree that is filled via MV from inserts to initial Null table data2.input_null like:
`CREATE MATERIALIZED VIEW data2.inserts_v02_MVst2 ON CLUSTER ch_sNr1 TO stat2.inserts_v02
AS SELECT
<data dimensions>,
hostName(),
count(), median(), etc
FROM data2.input_null GROUP BY ... ORDER BY ...`
I'm trying to somehow find out/evaluate (initial_)query_id of parent's insert inside Materialized View's query.
It would be very helpful for monitoring/troubleshooting/statistics.
If query_id or initial_query_id were known, it would have been possible to obtain a lot of additional information from system.query_log tables - both about origins of ETL data flow (IP addresses, http_user_agent) and about further destiny of this insert (whole picture of latencies within ClickHouse, additional checks that everything works as intended).
But all my attempts failed.
There are seemingly no built-in functions to obtain, during MV execution, the initial_query_id that triggered the MV (although modern ClickHouse versions do display a query's query_id in clickhouse-client).
I tried to fill initial_query_id via lookup into system.processes table, but the problem is that these are large inserts (500k-1M events) that are executed several seconds, and there are usually many simultaneous inserts of such kind.
So a materialized view (when it is executed) can't find out which of the existing system.processes INSERT queries is its parent.
`(SELECT arrayStringConcat(groupArray(initial_query_id), ',') FROM system.processes WHERE substr(query,1,37)='INSERT INTO data2.stage2_null_events') AS possible_query_ids,` | https://github.com/ClickHouse/ClickHouse/issues/23682 | https://github.com/ClickHouse/ClickHouse/pull/26410 | d72cc7c7a8ae997692fbc6dc0b4899fe802803f2 | 3f5016a61723d7b2956750af84c0f3ea672b646d | "2021-04-27T10:05:24Z" | c++ | "2021-07-22T15:28:38Z" |
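Per the source files in the linked PR, the functions were added as `queryID()` and `initialQueryID()`; a sketch of how the MV could then capture the parent insert's id in releases that include them:
```sql
CREATE MATERIALIZED VIEW data2.inserts_v02_MVst2 TO stat2.inserts_v02 AS
SELECT
    initialQueryID() AS initial_query_id,
    hostName() AS host,
    count() AS cnt
FROM data2.input_null
GROUP BY initial_query_id, host;
```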
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,661 | ["src/Storages/MergeTree/MergeTreeData.cpp"] | 01079_parallel_alter_add_drop_column_zookeeper.sh: Logical error: Part doesn't exist. | **Describe the bug**
```
5828589:2021.04.27 03:13:08.824665 [ 6101 ] {} <Trace> test_27.concurrent_alter_add_drop_3 (825021f7-129f-4841-ad6b-92215e3f1a94): Renaming temporary part tmp_merge_all_0_4_7_17 to all_0_4_7_17.
5828590:2021.04.27 03:13:08.824778 [ 3358 ] {} <Trace> test_27.concurrent_alter_add_drop_2 (f0a064d0-dc89-4710-9781-a49bb0c16461): Renaming temporary part tmp_merge_all_0_4_7_17 to all_0_4_7_17.
5828591:2021.04.27 03:13:08.824823 [ 6101 ] {} <Warning> test_27.concurrent_alter_add_drop_3 (825021f7-129f-4841-ad6b-92215e3f1a94): Tried to add obsolete part all_0_4_7_17 covered by all_0_4_13_34 (state Committed)
5828623:2021.04.27 03:13:08.824951 [ 3358 ] {} <Warning> test_27.concurrent_alter_add_drop_2 (f0a064d0-dc89-4710-9781-a49bb0c16461): Tried to add obsolete part all_0_4_7_17 covered by all_0_4_13_34 (state Committed)
5828624:2021.04.27 03:13:08.824995 [ 6101 ] {} <Warning> test_27.concurrent_alter_add_drop_3 (825021f7-129f-4841-ad6b-92215e3f1a94) (MergerMutator): Unexpected number of parts removed when adding all_0_4_7_17: 0 instead of 1
5828625:2021.04.27 03:13:08.825147 [ 3358 ] {} <Warning> test_27.concurrent_alter_add_drop_2 (f0a064d0-dc89-4710-9781-a49bb0c16461) (MergerMutator): Unexpected number of parts removed when adding all_0_4_7_17: 0 instead of 1
5828643:2021.04.27 03:13:08.831273 [ 3358 ] {} <Trace> test_27.concurrent_alter_add_drop_2 (f0a064d0-dc89-4710-9781-a49bb0c16461): Trying to immediately remove part all_0_4_7_17 (state Temporary)
5828644:2021.04.27 03:13:08.831658 [ 3358 ] {} <Fatal> : Logical error: 'Part all_0_4_7_17 doesn't exist'.
5828648:2021.04.27 03:13:08.832767 [ 6101 ] {} <Trace> test_27.concurrent_alter_add_drop_3 (825021f7-129f-4841-ad6b-92215e3f1a94): Trying to immediately remove part all_0_4_7_17 (state Temporary)
5828649:2021.04.27 03:13:08.833210 [ 6101 ] {} <Fatal> : Logical error: 'Part all_0_4_7_17 doesn't exist'.
```
https://clickhouse-test-reports.s3.yandex.net/0/9bb4d8769f1064ada4cce6b26d8ff61c11356965/stress_test_(debug).html#fail1 | https://github.com/ClickHouse/ClickHouse/issues/23661 | https://github.com/ClickHouse/ClickHouse/pull/28221 | af709ab9a0e3d67b5bc771aad9e3c2370bed6175 | 531079c452e638ef1475dcb9f1a0d69377008028 | "2021-04-27T03:47:52Z" | c++ | "2021-08-30T09:39:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,648 | ["tests/queries/0_stateless/02011_http_parsing.reference", "tests/queries/0_stateless/02011_http_parsing.sh"] | External data still broken after #22527 | **Describe the bug**
After #22527, posting external data is still broken when there is only one line. When there is more than one, `\r` is appended to the last line.
**Does it reproduce on recent release?**
It's reproducible in 21.4.5.46
[The list of releases](https://github.com/ClickHouse/ClickHouse/blob/master/utils/list-versions/version_date.tsv)
**How to reproduce**
```
docker run --rm --net=host --name=clickhouse yandex/clickhouse-server:21.4.5.46
```
```
$ printf 'string\nstring\n' | curl -F 'metrics_list=@-;' 'http://localhost:8123/?metrics_list_format=TSV&metrics_list_structure=Path+String&query=SELECT+*+FROM+metrics_list'
string
string
$ printf 'string\nstring' | curl -F 'metrics_list=@-;' 'http://localhost:8123/?metrics_list_format=TSV&metrics_list_structure=Path+String&query=SELECT+*+FROM+metrics_list'
string
string\r
$ printf 'string\n' | curl -F 'metrics_list=@-;' 'http://localhost:8123/?metrics_list_format=TSV&metrics_list_structure=Path+String&query=SELECT+*+FROM+metrics_list'
string
$ printf 'string' | curl -F 'metrics_list=@-;' 'http://localhost:8123/?metrics_list_format=TSV&metrics_list_structure=Path+String&query=SELECT+*+FROM+metrics_list'
Code: 117, e.displayText() = DB::Exception:
You have carriage return (\r, 0x0D, ASCII 13) at end of first row.
It's like your input data has DOS/Windows style line separators, that are illegal in TabSeparated format. You must transform your file to Unix format.
But if you really need carriage return at end of string value of last column, you need to escape it as \r.: (at row 1)
Row 1:
Column 0, name: Path, type: String, parsed text: "string<CARRIAGE RETURN>"
: While executing SourceFromInputStream (version 21.4.5.46 (official build))
```
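A possible client-side workaround until this is fixed (a sketch): normalize the stream so it always ends with a single newline, e.g. with `awk 1`, which appends a trailing newline only when one is missing:
```
$ printf 'string' | awk 1 | curl -F 'metrics_list=@-;' 'http://localhost:8123/?metrics_list_format=TSV&metrics_list_structure=Path+String&query=SELECT+*+FROM+metrics_list'
```
This should then behave like the `printf 'string\n'` case above.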
**Expected behavior**
I expect the first two commands, and likewise the last two, to return the same output. | https://github.com/ClickHouse/ClickHouse/issues/23648 | https://github.com/ClickHouse/ClickHouse/pull/27762 | 32ee8618b7dfee549937f665b59385421ed38c4a | f04cdf58c09569294893ad1993a00fc909a4a808 | "2021-04-26T14:51:27Z" | c++ | "2021-08-17T18:36:45Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,515 | ["src/Interpreters/ExpressionActions.cpp", "src/Storages/MergeTree/MergeTreeRangeReader.cpp", "tests/queries/0_stateless/02003_bug_from_23515.reference", "tests/queries/0_stateless/02003_bug_from_23515.sql"] | Expected ColumnLowCardinality, got UInt8: While executing MergingSortedTransform | **Describe the bug**
We encounter the following problem when performing some SELECT queries to a Distributed table with LowCardinality columns:
```DB::Exception: Expected ColumnLowCardinality, gotUInt8: While executing MergingSortedTransform```
**Does it reproduce on recent release?**
Reproduces on the second most recent release.
**How to reproduce**
* Which ClickHouse server version to use
21.3.4.25
* Non-default settings, if any
```
use_uncompressed_cache,0
replication_alter_partitions_sync,2
load_balancing,random
log_queries,1
max_result_bytes,32212254720
max_execution_time,600
readonly,2
max_memory_usage,0
max_memory_usage_for_user,32212254720
```
* `CREATE TABLE` statements for all tables involved
```
create table tablename
(
column1 DateTime64(9),
column2 LowCardinality(String),
column3 LowCardinality(String),
column4 LowCardinality(String),
column5 Array(UInt64),
column6 Array(Decimal(18, 9)),
column7 Array(Decimal(18, 6)),
column8 Array(UInt64),
column9 Array(Decimal(18, 9)),
column10 Array(Decimal(18, 6)),
column11 UInt8,
column12 Nullable(DateTime64(9)),
column13 Array(LowCardinality(String)) default [],
column14 Array(LowCardinality(String)) default []
)
engine = Distributed('some_cluster', 'default', 'tablename_partial', halfMD5(toStartOfHour(column1), column2));
create table tablename_partial
(
column1 DateTime64(9),
column2 LowCardinality(String),
column3 LowCardinality(String),
column4 LowCardinality(String),
column5 Array(UInt64),
column6 Array(Decimal(18, 9)),
column7 Array(Decimal(18, 6)),
column8 Array(UInt64),
column9 Array(Decimal(18, 9)),
column10 Array(Decimal(18, 6)),
column11 UInt8,
column12 Nullable(DateTime64(9)),
column13 Array(LowCardinality(String)) default [],
column14 Array(LowCardinality(String)) default []
)
engine = ReplicatedMergeTree('/clickhouse/tables/{shard}/tablename_partial', '{replica}')
PARTITION BY (column4, toDate(column1))
ORDER BY (column2, column1, column3)
SETTINGS index_granularity = 8192;
```
* Queries to run that lead to unexpected result
```sql
SELECT
*
FROM tablename
WHERE column1 >= toDateTime64('2020-05-01 00:00:00.000000000', 9)
AND column1 < toDateTime64('2020-06-01 00:00:00.000000000', 9)
AND column3 = 'value1'
AND column2 = 'value2'
ORDER BY column2, column1, column3
LIMIT 10000
SETTINGS optimize_read_in_order=1;
```
**Expected behavior**
Expected the query to work, not to throw an exception.
**Error message and/or stacktrace**
```
DB::Exception: Expected ColumnLowCardinality, gotUInt8: While executing MergingSortedTransform, Stack trace (when copying this message, always include the lines below):
0. DB::ColumnLowCardinality::insertFrom(DB::IColumn const&, unsigned long) @ 0xf1e7da7 in /usr/bin/clickhouse
1. DB::IMergingAlgorithm::Status DB::MergingSortedAlgorithm::mergeImpl<DB::SortingHeap<DB::SortCursor> >(DB::SortingHeap<DB::SortCursor>&) @ 0xfac3beb in /usr/bin/clickhouse
2. DB::IMergingTransform<DB::MergingSortedAlgorithm>::work() @ 0xf63e0bd in /usr/bin/clickhouse
3. ? @ 0xf9391ed in /usr/bin/clickhouse
4. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0xf935e11 in /usr/bin/clickhouse
5. ? @ 0xf93a9e6 in /usr/bin/clickhouse
6. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x865165f in /usr/bin/clickhouse
7. ? @ 0x86550f3 in /usr/bin/clickhouse
8. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
9. __clone @ 0xfe96d in /usr/lib64/libc-2.17.so
(version 21.3.4.25 (official build))
```
**Additional context**
##### Workaround
Explicitly `cast` the LowCardinality columns to their base type.
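For example, applied to the failing query above (a sketch; exactly which columns need the cast may vary):
```sql
SELECT *
FROM tablename
WHERE column1 >= toDateTime64('2020-05-01 00:00:00.000000000', 9)
  AND column1 < toDateTime64('2020-06-01 00:00:00.000000000', 9)
  AND column3 = 'value1'
  AND column2 = 'value2'
ORDER BY CAST(column2 AS String), column1, CAST(column3 AS String)
LIMIT 10000
SETTINGS optimize_read_in_order = 1;
```
Note that casting the ORDER BY columns also keeps the read-in-order optimization from applying to them, which is consistent with the observation below that disabling `optimize_read_in_order` avoids the problem.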
Also, the problem doesn't reproduce
- when using a non-distributed table,
- or when disabling the option `optimize_read_in_order`,
- or, seemingly at random, when you change the query parameters (column1). | https://github.com/ClickHouse/ClickHouse/issues/23515 | https://github.com/ClickHouse/ClickHouse/pull/27298 | d245eb1705018d1413c0db6985f47ea348c10c6d | bb4c11cd27d463a92bb4458b7df270db7d84c78f | "2021-04-22T17:03:43Z" | c++ | "2021-08-09T20:25:30Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,503 | ["tests/queries/0_stateless/02125_many_mutations.reference", "tests/queries/0_stateless/02125_many_mutations.sh"] | In case of extremely large number of mutations, Segmentation fault (stack overflow) in DB::MergeTreeDataMergerMutator::mutateAllPartColumns | **Describe the bug**
The service is crashing when applying a mutation. The server currently has 20k+ mutations and I'm not sure which one is the culprit.
```
SELECT count(*) FROM system.mutations
SELECT count(*)
FROM system.mutations
┌─count()─┐
│ 20726 │
└─────────┘
1 rows in set. Elapsed: 11.701 sec. Processed 20.73 thousand rows, 107.94 MB (1.77 thousand rows/s., 9.22 MB/s.)
```
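To narrow down the culprit, a query along these lines might help (a sketch over standard system.mutations columns; the oldest unfinished mutation is the most likely suspect):
```
SELECT database, table, mutation_id, command, parts_to_do, latest_failed_part, latest_fail_reason
FROM system.mutations
WHERE NOT is_done
ORDER BY create_time ASC
LIMIT 10
```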
**Does it reproduce on recent release?**
It was happening in 20.7 but we didn't check the backtrace at that moment. Since it's a test machine, we cleaned up everything, updated to 20.8.16.20 and it stopped happening, but it started happening again after a recent restart of the service; it wasn't crashing before that, but the service seemed stuck and was restarted manually (systemctl restart clickhouse-server).
**How to reproduce**
It is a background process, so I don't know the exact steps to reproduce it, but it always crashes 5-10 minutes after restart.
**Expected behavior**
The service shouldn't crash; the mutation should either be applied or rejected with an exception.
**Error message and/or stacktrace**
It seems that the thread is running out of stack space: the backtrace runs to frame **#69408**, a huge recursive chain going from `DB::IBlockInputStream::read` to `DB::MaterializingBlockInputStream::readImpl` and back to `DB::IBlockInputStream::read`.
Partial call stack (removing many intermediate steps, which are repeated):
```
(gdb) bt 10
#0 0x000000001556418b in clock_gettime_ns (clock_type=6) at ../src/Common/Stopwatch.h:44
#1 Stopwatch::nanoseconds (this=0x7fe944f050f8) at ../src/Common/Stopwatch.h:44
#2 Stopwatch::start (this=0x7fe944f050f8) at ../src/Common/Stopwatch.h:28
#3 DB::IBlockInputStream::read (this=0x7fe944f05020) at ../src/DataStreams/IBlockInputStream.cpp:44
#4 0x0000000015bf09af in DB::ExpressionBlockInputStream::readImpl (this=0x7fe944f05320) at ../contrib/libcxx/include/memory:3826
#5 0x00000000155642dd in DB::IBlockInputStream::read (this=0x7fe944f05320) at ../src/DataStreams/IBlockInputStream.cpp:57
#6 0x0000000015bf09af in DB::ExpressionBlockInputStream::readImpl (this=0x7fe944f05620) at ../contrib/libcxx/include/memory:3826
#7 0x00000000155642dd in DB::IBlockInputStream::read (this=0x7fe944f05620) at ../src/DataStreams/IBlockInputStream.cpp:57
#8 0x0000000015bf09af in DB::ExpressionBlockInputStream::readImpl (this=0x7fe944f05920) at ../contrib/libcxx/include/memory:3826
#9 0x00000000155642dd in DB::IBlockInputStream::read (this=0x7fe944f05920) at ../src/DataStreams/IBlockInputStream.cpp:57
(More stack frames follow...)
```
```
(gdb) bt -30
#69379 0x00000000155642dd in DB::IBlockInputStream::read (this=0x7fe96a3b4b20) at ../src/DataStreams/IBlockInputStream.cpp:57
#69380 0x0000000015980b3f in DB::MaterializingBlockInputStream::readImpl (this=<optimized out>) at ../contrib/libcxx/include/memory:3826
#69381 0x00000000155642dd in DB::IBlockInputStream::read (this=0x7fe96a3b4e20) at ../src/DataStreams/IBlockInputStream.cpp:57
#69382 0x0000000015bf4fef in DB::CheckSortedBlockInputStream::readImpl (this=0x7fe96a3b6e20) at ../contrib/libcxx/include/memory:3826
#69383 0x00000000155642dd in DB::IBlockInputStream::read (this=0x7fe96a3b6e20) at ../src/DataStreams/IBlockInputStream.cpp:57
#69384 0x0000000015bf09af in DB::ExpressionBlockInputStream::readImpl (this=0x7fe96a3b5420) at ../contrib/libcxx/include/memory:3826
#69385 0x00000000155642dd in DB::IBlockInputStream::read (this=0x7fe96a3b5420) at ../src/DataStreams/IBlockInputStream.cpp:57
#69386 0x0000000015980b3f in DB::MaterializingBlockInputStream::readImpl (this=<optimized out>) at ../contrib/libcxx/include/memory:3826
#69387 0x00000000155642dd in DB::IBlockInputStream::read (this=0x7fe96a3b5720) at ../src/DataStreams/IBlockInputStream.cpp:57
#69388 0x0000000016133cb3 in DB::MergeTreeDataMergerMutator::mutateAllPartColumns (this=this@entry=0x7febf4438bd8, new_data_part=..., metadata_snapshot=..., skip_indices=..., mutating_stream=..., time_of_mutation=time_of_mutation@entry=1619083974,
compression_codec=..., merge_entry=..., need_remove_expired_values=false) at ../contrib/libcxx/include/memory:3826
#69389 0x0000000016135724 in DB::MergeTreeDataMergerMutator::mutatePartToTemporaryPart (this=this@entry=0x7febf4438bd8, future_part=..., metadata_snapshot=..., commands=..., merge_entry=..., time_of_mutation=1619083974, context=...,
space_reservation=...) at ../contrib/libcxx/include/memory:3474
#69390 0x0000000015f0eaee in DB::StorageMergeTree::tryMutatePart (this=this@entry=0x7febf4438700) at ../contrib/libcxx/include/memory:2582
#69391 0x0000000015f0f9ca in DB::StorageMergeTree::mergeMutateTask (this=0x7febf4438700) at ../src/Storages/StorageMergeTree.cpp:921
#69392 0x000000001609cc33 in std::__1::__function::__value_func<DB::BackgroundProcessingPoolTaskResult ()>::operator()() const (this=<optimized out>) at ../contrib/libcxx/include/functional:2471
#69393 std::__1::function<DB::BackgroundProcessingPoolTaskResult ()>::operator()() const (this=<optimized out>) at ../contrib/libcxx/include/functional:2473
#69394 DB::BackgroundProcessingPool::workLoopFunc (this=0x7fecf545aa18) at ../src/Storages/MergeTree/BackgroundProcessingPool.cpp:202
#69395 0x000000001609d572 in DB::BackgroundProcessingPool::<lambda()>::operator() (__closure=0x7febf3744098) at ../src/Storages/MergeTree/BackgroundProcessingPool.cpp:53
#69396 std::__1::__invoke_constexpr<const DB::BackgroundProcessingPool::BackgroundProcessingPool(int, const DB::BackgroundProcessingPool::PoolSettings&, char const*, char const*)::<lambda()>&> (__f=...) at ../contrib/libcxx/include/type_traits:3525
#69397 std::__1::__apply_tuple_impl<const DB::BackgroundProcessingPool::BackgroundProcessingPool(int, const DB::BackgroundProcessingPool::PoolSettings&, char const*, char const*)::<lambda()>&, const std::__1::tuple<>&> (__t=..., __f=...)
at ../contrib/libcxx/include/tuple:1415
#69398 std::__1::apply<const DB::BackgroundProcessingPool::BackgroundProcessingPool(int, const DB::BackgroundProcessingPool::PoolSettings&, char const*, char const*)::<lambda()>&, const std::__1::tuple<>&> (__t=..., __f=...)
at ../contrib/libcxx/include/tuple:1424
#69399 ThreadFromGlobalPool::<lambda()>::operator()(void) const (this=0x7febf3744088) at ../src/Common/ThreadPool.h:172
#69400 0x000000000e662ba7 in std::__1::__function::__value_func<void ()>::operator()() const (this=0x7febf0df4e20) at ../contrib/libcxx/include/functional:2471
#69401 std::__1::function<void ()>::operator()() const (this=0x7febf0df4e20) at ../contrib/libcxx/include/functional:2473
#69402 ThreadPoolImpl<std::__1::thread>::worker (this=0x7fecf5454300, thread_it=...) at ../src/Common/ThreadPool.cpp:243
#69403 0x000000000e661093 in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#3}::operator()() const (this=<optimized out>, this=<optimized out>)
at ../src/Common/ThreadPool.cpp:124
#69404 std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#3}>(void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#3}&&, (void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#3}&&)...) (__f=...)
at ../contrib/libcxx/include/type_traits:3519
#69405 std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#3}>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#3}>&, std::__1::__tuple_indices<>) (__t=...) at ../contrib/libcxx/include/thread:273
#69406 std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#3}> >(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::{lambda()#3}>) (__vp=0x7fec08a0e700) at ../contrib/libcxx/include/thread:284
#69407 0x00007fecf6d446db in start_thread (arg=0x7febf0dfb700) at pthread_create.c:463
#69408 0x00007fecf666171f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
```
I don't see anything interesting in the last log messages before crashing:
Logs:
```
2021.04.22 10:41:12.636191 [ 7445 ] {} <Debug> MemoryTracker: Peak memory usage (for query): 48.37 MiB.
2021.04.22 10:41:13.521550 [ 7444 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: GET, Address: 10.156.0.2:41368, User-Agent: none, Content Type: , Transfer Encoding: identity
2021.04.22 10:41:15.034408 [ 7369 ] {} <Debug> MemoryTracker: Current memory usage (total): 21.00 GiB.
2021.04.22 10:41:15.284308 [ 7376 ] {} <Trace> SystemLog (system.part_log): Flushing system log, 6 entries to flush
2021.04.22 10:41:15.284696 [ 7376 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 34.20 GiB.
2021.04.22 10:41:15.285337 [ 7376 ] {} <Trace> system.part_log: Renaming temporary part tmp_insert_202104_36_36_0 to 202104_158201_158201_0.
2021.04.22 10:41:15.285451 [ 7376 ] {} <Trace> SystemLog (system.part_log): Flushed system log
2021.04.22 10:41:15.307001 [ 7430 ] {} <Trace> SystemLog (system.query_thread_log): Flushing system log, 18 entries to flush
2021.04.22 10:41:15.307709 [ 7430 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 34.20 GiB.
2021.04.22 10:41:15.308587 [ 7430 ] {} <Trace> system.query_thread_log: Renaming temporary part tmp_insert_202104_36_36_0 to 202104_158187_158187_0.
2021.04.22 10:41:15.308715 [ 7430 ] {} <Trace> SystemLog (system.query_thread_log): Flushed system log
2021.04.22 10:41:15.316645 [ 7421 ] {} <Trace> SystemLog (system.query_log): Flushing system log, 12 entries to flush
2021.04.22 10:41:15.317511 [ 7421 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 34.20 GiB.
2021.04.22 10:41:15.318236 [ 7421 ] {} <Trace> system.query_log: Renaming temporary part tmp_insert_202104_36_36_0 to 202104_158178_158178_0.
2021.04.22 10:41:15.318509 [ 7421 ] {} <Trace> SystemLog (system.query_log): Flushed system log
2021.04.22 10:41:17.187182 [ 7360 ] {} <Debug> MemoryTracker: Current memory usage: 2.00 GiB.
2021.04.22 10:41:17.409919 [ 7445 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: GET, Address: 10.156.0.2:41440, User-Agent: python-requests/2.25.1, Content Type: , Transfer Encoding: identity
2021.04.22 10:41:49.012835 [ 7757 ] {} <Information> SentryWriter: Sending crash reports is disabled
2021.04.22 10:41:49.015361 [ 7757 ] {} <Trace> Pipe: Pipe capacity is 1.00 MiB
2021.04.22 10:41:49.166031 [ 7757 ] {} <Information> : Starting ClickHouse 20.8.16.20 with revision 54438, build id: 466AA0409C2F4D82, PID 7757
2021.04.22 10:41:49.166131 [ 7757 ] {} <Information> Application: starting up
2021.04.22 10:41:49.171628 [ 7757 ] {} <Information> StatusFile: Status file /mnt/disks/tb/clickhouse/status already exists - unclean restart. Contents:
PID: 7328
Started at: 2021-04-22 10:36:45
Revision: 54438
2021.04.22 10:41:49.171717 [ 7757 ] {} <Debug> Application: rlimit on number of file descriptors is 500000
```
Error logs around that time (edited to remove column names)
```
2021.04.22 10:41:02.031781 [ 7434 ] {f588880d-1010-4d70-91dd-ab697c132384} <Error> executeQuery: Code: 252, e.displayText() = DB::Exception: Too many parts (300). Merges are processing significantly slower than inserts. (version 20.8.16.20 (official bui
ld)) (from 127.0.0.1:56194) (in query: INSERT INTO `d_073c5e`.`t_3b2dc2a7eadf4220826aa88a53b561b4` (`XXXXXXXXXXXXXXXXXXXXXXXX) VALUES), Stack trace (when copying this message, always include the lines below):
0. /build/obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Exception.cpp:27: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18cdacf0 in /usr/lib/debug/usr/bin/clickhou
se
1. /build/obj-x86_64-linux-gnu/../src/Common/Exception.cpp:37: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xe63533d in /usr/lib/debug/usr/bin/clickhouse
2. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/string:2134: DB::MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event*) const (.cold) @ 0x16111d2a in /usr/lib/debug/usr/bin/clickhouse
3. /build/obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeBlockOutputStream.cpp:20: DB::MergeTreeBlockOutputStream::write(DB::Block const&) @ 0x160dc745 in /usr/lib/debug/usr/bin/clickhouse
4. /build/obj-x86_64-linux-gnu/../src/DataStreams/PushingToViewsBlockOutputStream.cpp:156: DB::PushingToViewsBlockOutputStream::write(DB::Block const&) @ 0x1597c046 in /usr/lib/debug/usr/bin/clickhouse
5. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/__hash_table:1541: DB::AddingDefaultBlockOutputStream::write(DB::Block const&) @ 0x159bf8cf in /usr/lib/debug/usr/bin/clickhouse
6. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/__hash_table:1541: DB::SquashingBlockOutputStream::finalize() @ 0x15981697 in /usr/lib/debug/usr/bin/clickhouse
7. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3826: DB::SquashingBlockOutputStream::writeSuffix() @ 0x1598177d in /usr/lib/debug/usr/bin/clickhouse
8. /build/obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:502: DB::TCPHandler::processInsertQuery(DB::Settings const&) @ 0x163a6bca in /usr/lib/debug/usr/bin/clickhouse
9. /build/obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:263: DB::TCPHandler::runImpl() @ 0x163a7dcb in /usr/lib/debug/usr/bin/clickhouse
10. /build/obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1219: DB::TCPHandler::run() @ 0x163a8470 in /usr/lib/debug/usr/bin/clickhouse
11. /build/obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x18bf89bb in /usr/lib/debug/usr/bin/clickhouse
12. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/atomic:856: Poco::Net::TCPServerDispatcher::run() @ 0x18bf8f48 in /usr/lib/debug/usr/bin/clickhouse
13. /build/obj-x86_64-linux-gnu/../contrib/poco/Foundation/include/Poco/Mutex_POSIX.h:59: Poco::PooledThread::run() @ 0x18d77a36 in /usr/lib/debug/usr/bin/clickhouse
14. /build/obj-x86_64-linux-gnu/../contrib/poco/Foundation/include/Poco/AutoPtr.h:223: Poco::ThreadImpl::runnableEntry(void*) @ 0x18d72e30 in /usr/lib/debug/usr/bin/clickhouse
15. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
16. /build/glibc-S9d2JN/glibc-2.27/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:97: clone @ 0x12171f in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.27.so
2021.04.22 10:41:03.527826 [ 7434 ] {287e3a6b-7823-48a5-84e9-908fa8386edd} <Error> executeQuery: Code: 252, e.displayText() = DB::Exception: Too many parts (300). Merges are processing significantly slower than inserts. (version 20.8.16.20 (official bui
ld)) (from 127.0.0.1:56198) (in query: INSERT INTO `d_073c5e`.`t_3b2dc2a7eadf4220826aa88a53b561b4` (XXXXXXXXXXXXXXXXXXXXXXXXX) VALUES), Stack trace (when copying this message, always include the lines below):
0. /build/obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Exception.cpp:27: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18cdacf0 in /usr/lib/debug/usr/bin/clickhou
se
1. /build/obj-x86_64-linux-gnu/../src/Common/Exception.cpp:37: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xe63533d in /usr/lib/debug/usr/bin/clickhouse
2. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/string:2134: DB::MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event*) const (.cold) @ 0x16111d2a in /usr/lib/debug/usr/bin/clickhouse
3. /build/obj-x86_64-linux-gnu/../src/Storages/MergeTree/MergeTreeBlockOutputStream.cpp:20: DB::MergeTreeBlockOutputStream::write(DB::Block const&) @ 0x160dc745 in /usr/lib/debug/usr/bin/clickhouse
4. /build/obj-x86_64-linux-gnu/../src/DataStreams/PushingToViewsBlockOutputStream.cpp:156: DB::PushingToViewsBlockOutputStream::write(DB::Block const&) @ 0x1597c046 in /usr/lib/debug/usr/bin/clickhouse
5. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/__hash_table:1541: DB::AddingDefaultBlockOutputStream::write(DB::Block const&) @ 0x159bf8cf in /usr/lib/debug/usr/bin/clickhouse
6. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/__hash_table:1541: DB::SquashingBlockOutputStream::finalize() @ 0x15981697 in /usr/lib/debug/usr/bin/clickhouse
7. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:3826: DB::SquashingBlockOutputStream::writeSuffix() @ 0x1598177d in /usr/lib/debug/usr/bin/clickhouse
8. /build/obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:502: DB::TCPHandler::processInsertQuery(DB::Settings const&) @ 0x163a6bca in /usr/lib/debug/usr/bin/clickhouse
9. /build/obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:263: DB::TCPHandler::runImpl() @ 0x163a7dcb in /usr/lib/debug/usr/bin/clickhouse
10. /build/obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1219: DB::TCPHandler::run() @ 0x163a8470 in /usr/lib/debug/usr/bin/clickhouse
11. /build/obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x18bf89bb in /usr/lib/debug/usr/bin/clickhouse
12. /build/obj-x86_64-linux-gnu/../contrib/libcxx/include/atomic:856: Poco::Net::TCPServerDispatcher::run() @ 0x18bf8f48 in /usr/lib/debug/usr/bin/clickhouse
13. /build/obj-x86_64-linux-gnu/../contrib/poco/Foundation/include/Poco/Mutex_POSIX.h:59: Poco::PooledThread::run() @ 0x18d77a36 in /usr/lib/debug/usr/bin/clickhouse
14. /build/obj-x86_64-linux-gnu/../contrib/poco/Foundation/include/Poco/AutoPtr.h:223: Poco::ThreadImpl::runnableEntry(void*) @ 0x18d72e30 in /usr/lib/debug/usr/bin/clickhouse
15. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
16. /build/glibc-S9d2JN/glibc-2.27/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:97: clone @ 0x12171f in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.27.so
2021.04.22 10:41:54.661816 [ 7833 ] {} <Warning> d_073c5e.t_3b2dc2a7eadf4220826aa88a53b561b4: Removing temporary directory /mnt/disks/tb/clickhouse/data/d_073c5e/t_3b2dc2a7eadf4220826aa88a53b561b4/tmp_mut_all_70357_70357_0_87295/
2021.04.22 10:41:54.680371 [ 7833 ] {} <Warning> d_073c5e.t_3b2dc2a7eadf4220826aa88a53b561b4: Removing temporary directory /mnt/disks/tb/clickhouse/data/d_073c5e/t_3b2dc2a7eadf4220826aa88a53b561b4/tmp_mut_all_2599_4160_9_87295/
```
I'm not sure whether this is infinite recursion, recursion that is simply too deep for the stack size, or some other issue stemming from the fact that there are so many mutations, or that they are too big. Any hints to identify and address the issue would be greatly appreciated.
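As a possible stop-gap, assuming the offending mutation can be identified (for example with the system.mutations query shown earlier), it can be cancelled with KILL MUTATION; a sketch, where the mutation_id is a hypothetical placeholder:
```
KILL MUTATION
WHERE database = 'd_073c5e'
  AND table = 't_3b2dc2a7eadf4220826aa88a53b561b4'
  AND mutation_id = 'mutation_87295.txt' -- hypothetical id
```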
**Additional context**
Server memory:
```
$ free -m
total used free shared buff/cache available
Mem: 64322 4878 25450 19 33993 58625
Swap: 0 0 0
```
Service limits at runtime:
```
cat /proc/6925/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 10737418240 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 257114 257114 processes
Max open files 500000 500000 files
Max locked memory 67108864 67108864 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 257114 257114 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
``` | https://github.com/ClickHouse/ClickHouse/issues/23503 | https://github.com/ClickHouse/ClickHouse/pull/32327 | 30963dcbc2e97b831677415314bccd5275702884 | 4e6bf2456c8caa3fc718b537d64ae1b44d6e2ab7 | "2021-04-22T10:52:14Z" | c++ | "2021-12-08T11:23:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 23,435 | ["src/Storages/System/IStorageSystemOneBlock.h", "src/Storages/System/StorageSystemDictionaries.cpp", "src/Storages/System/StorageSystemDictionaries.h", "tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.reference", "tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.sql"] | Can't connect to clickhouse with tabix after update to ch version 21.4.4.30 | Can't connect directly to ClickHouse with Tabix after updating to version 21.4.4.30; the error is:
Code: 47, e.displayText() = DB::Exception: Missing columns: 'key' while processing query: 'SELECT name, key, attribute.names, attribute.types FROM system.dictionaries ARRAY JOIN attribute ORDER BY name ASC, attribute.names ASC', required columns: 'name' 'key', maybe you meant: ['name'], arrayJoin columns: 'attribute.names' 'attribute.types' (version 21.4.4.30 (official build))
The `system.dictionaries` table was modified in version 21.4.4.30:
https://github.com/ClickHouse/ClickHouse/commit/a53c90e509d0ab9596e73747f085cf0191284311?branch=a53c90e509d0ab9596e73747f085cf0191284311&diff=unified @kitaisreal
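For reference, this is the query Tabix issues, reconstructed verbatim from the error message above; it fails because `key` is no longer a real column of `system.dictionaries` in 21.4.4.30:
```sql
SELECT name, key, attribute.names, attribute.types
FROM system.dictionaries
ARRAY JOIN attribute
ORDER BY name ASC, attribute.names ASC;
```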
| https://github.com/ClickHouse/ClickHouse/issues/23435 | https://github.com/ClickHouse/ClickHouse/pull/23458 | 8134c270a2f468deaa6534be8765588cd61b1076 | 3df3acc9706d9ecc34cd078142c370150649f02d | "2021-04-21T13:27:03Z" | c++ | "2021-04-22T05:42:15Z" |