status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | nektos/act | https://github.com/nektos/act | 174 | ["go.mod", "go.sum"] | Conflict: container name already in use | When trying to run `act` on https://github.com/doctrine/sql-formatter, I'm getting this output:
```
[Continuous Integration/Static Analysis with Psalm ] 🚀 Start image=node:12.6-buster-slim
[Continuous Integration/Unit tests-1 ] 🧪 Matrix: map[php-version:7.2]
[Continuous Integration/Unit tests-1 ] 🚀 Start image=node:12.6-buster-slim
[Continuous Integration/Coding Standards ] 🧪 Matrix: map[php-version:7.2]
[Continuous Integration/Coding Standards ] 🚀 Start image=node:12.6-buster-slim
[Continuous Integration/Unit tests-2 ] 🧪 Matrix: map[php-version:7.3]
[Continuous Integration/Unit tests-2 ] 🚀 Start image=node:12.6-buster-slim
[Continuous Integration/Unit tests-3 ] 🧪 Matrix: map[php-version:7.4]
[Continuous Integration/Unit tests-3 ] 🚀 Start image=node:12.6-buster-slim
[Continuous Integration/Static Analysis with PHPStan] 🚀 Start image=node:12.6-buster-slim
Error: Error response from daemon: Conflict. The container name "/act-Continuous-Integration-Sta" is already in use by container "a09e515b212dcc32454ee2ddf11e5a8f56556d5179df6af90659f3d0efedde50". You have to remove (or rename) that container to be able to reuse that name.
```
Note how I have a workflow named `Continuous Integration/Static Analysis with Psalm`, and another named `Continuous Integration/Static Analysis with PHPStan`. A fix might be to append a hash of the workflow name to the container name, dunno why it's truncated in the first place though. Is there a limit to the length of docker container names? | https://github.com/nektos/act/issues/174 | https://github.com/nektos/act/pull/1311 | 12029e3df5841538a806293881f836f88ec07a2c | cf9d82f69daab9dc907b893a38307a17f444c529 | "2020-03-29T10:44:16Z" | go | "2022-08-22T02:17:13Z" |
closed | nektos/act | https://github.com/nektos/act | 162 | ["go.mod", "go.sum"] | uses: '.' fails with runtime error: index out of range | I have a repository which contains:
- a github action in the root directory
- a github workflow for testing
Unfortunately I cannot share this repository as it is private but the problem should be easy to reproduce.
In the workflow I use the action which is located in the repository root.
```
- name: test action
uses: '.'
with:
...
```
Now when I run this workflow using act the build crashes:
```
[test/build] ⭐ test action
panic: runtime error: index out of range
goroutine 18 [running]:
github.com/nektos/act/pkg/runner.newRemoteAction(0x130c24e, 0x1, 0xc8d1a0)
/home/runner/work/act/act/pkg/runner/step_context.go:321 +0x211
github.com/nektos/act/pkg/runner.(*StepContext).Executor(0xc0003f0080, 0xcabcdd)
/home/runner/work/act/act/pkg/runner/step_context.go:60 +0x2fe
github.com/nektos/act/pkg/runner.(*RunContext).newStepExecutor.func1(0xdea3e0, 0xc0003cdb60, 0x0, 0x0)
/home/runner/work/act/act/pkg/runner/run_context.go:204 +0x349
github.com/nektos/act/pkg/common.Executor.Then.func1(0xdea3e0, 0xc0003cdb60, 0xdf5ee0, 0xc00035a520)
/home/runner/work/act/act/pkg/common/executor.go:146 +0x17c
github.com/nektos/act/pkg/common.Executor.Then.func1(0xdea3e0, 0xc0003cdb60, 0x1, 0x0)
/home/runner/work/act/act/pkg/common/executor.go:133 +0x4c
github.com/nektos/act/pkg/common.Executor.If.func1(0xdea3e0, 0xc0003cdb60, 0xc00035a4b0, 0xa)
/home/runner/work/act/act/pkg/common/executor.go:154 +0x6a
github.com/nektos/act/pkg/runner.(*runnerImpl).NewPlanExecutor.func1(0xdea3e0, 0xc0003665d0, 0xc0003e39a0, 0xc000060f70)
/home/runner/work/act/act/pkg/runner/runner.go:74 +0x23a
github.com/nektos/act/pkg/common.Executor.ChannelError.func1(0xdea3e0, 0xc0003665d0, 0xc0003e39a0, 0x0)
/home/runner/work/act/act/pkg/common/executor.go:125 +0x45
github.com/nektos/act/pkg/common.NewParallelExecutor.func1.1(0xc0003e3960, 0xc00039e060, 0xdea3e0, 0xc0003665d0)
/home/runner/work/act/act/pkg/common/executor.go:101 +0x56
created by github.com/nektos/act/pkg/common.NewParallelExecutor.func1
/home/runner/work/act/act/pkg/common/executor.go:100 +0xb7
```
My workaround is to copy the action to a sub-directory `build` first and adapt the workflow to use it from there:
```
- name: test action
uses: './build'
with:
...
```
During execution with act you can see that the `uses` string is used to calculate the Docker image tag. Maybe this is related.
```
docker build -t act-build:latest <path-to>/build
``` | https://github.com/nektos/act/issues/162 | https://github.com/nektos/act/pull/1239 | 50f0b0e7f49d368583a30e515fb16db58c357bbc | 4d9d6ecc924bef6c5976acd7f224558e21fa5b5f | "2020-03-17T08:43:52Z" | go | "2022-07-04T02:23:04Z" |
closed | nektos/act | https://github.com/nektos/act | 161 | ["pkg/container/docker_run.go", "pkg/container/docker_run_test.go", "pkg/container/testdata/Dockerfile", "pkg/runner/run_context.go", "pkg/runner/step_context.go"] | Maven Step Fails | Expected behavior:
- Maven commands (`mvn`) execute when using ` act -P ubuntu-latest=nektos/act-environments-ubuntu:18.04-full`
Actual behavior:
- Maven builds fail due to Maven not being installed.
```
| /github/workflow/3: line 2: mvn: command not found
[Build, Test, and Upload/Java Application Container] ❌ Failure - Compile and Test Using Maven
```
The environments image appears to be either out-of-date, or does not install the same tools/dependencies as the GitHub virtual-environments. This means that currently, Java builds that can be executed directly from GitHub Actions cannot be run locally. | https://github.com/nektos/act/issues/161 | https://github.com/nektos/act/pull/828 | 7a426a0f37d21e32a01e85d9403edba17a54d545 | 6c60af76779b739b7487e3fbe83ea15cd2ab8ad5 | "2020-03-16T20:19:45Z" | go | "2021-09-27T19:01:14Z" |
closed | nektos/act | https://github.com/nektos/act | 151 | ["go.mod", "go.sum"] | Env context is not being replaced using expressions | Act does not replace the `env` context when using expressions in workflow files
## Issue
When using ``${{ env.WHATEVER }}`` in a workflow file, `act` does not replace it with the actual environment variable and returns undefined
## Reproducing
I've created a repo where this is easily reproducible
https://github.com/davidlj95/act-bug
You can see in the Actions tab that GitHub replaces the env var; however, when running `act` locally it does not, and returns `undefined`
Thanks! | https://github.com/nektos/act/issues/151 | https://github.com/nektos/act/pull/1321 | b23bbefc365012886192e477a88d38a0909ecba1 | 9b146980dfabd14e9b9defea8f894ad8ab24132a | "2020-03-13T10:55:46Z" | go | "2022-08-29T02:17:28Z" |
closed | nektos/act | https://github.com/nektos/act | 148 | ["go.mod", "go.sum"] | Unable to install python | Others have posted this same issue and those issues have been marked as closed, but I haven't seen a solution.
Related tickets:
* https://github.com/nektos/act/issues/90
* https://github.com/nektos/act/issues/100
Upon running `act`, Python fails to install.
Thanks in advance for the help.
### Action YAML
```
name: CI
on: [push]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v1.1.1 # or v1
with:
python-version: 3.7 # have also tried omitting this
```
### Log
```
vagrant@ubuntu-xenial:~/frontend$ act
[TAP CI - dev/Deploy TAP package] 🚀 Start image=node:12.6-buster-slim
[TAP CI - dev/Deploy TAP package] 🐳 docker run image=node:12.6-buster-slim entrypoint=["/usr/bin/tail" "-f" "/dev/null"] cmd=[]
[TAP CI - dev/Deploy TAP package] 🐳 docker cp src=/home/vagrant/frontend/. dst=/github/workspace
[TAP CI - dev/Deploy TAP package] ⭐ Run actions/checkout@v2
[TAP CI - dev/Deploy TAP package] ✅ Success - actions/checkout@v2
[TAP CI - dev/Deploy TAP package] ⭐ Run actions/setup-python@v1.1.1
[TAP CI - dev/Deploy TAP package] ☁ git clone 'https://github.com/actions/setup-python' # ref=v1.1.1
[TAP CI - dev/Deploy TAP package] 🐳 docker cp src=/home/vagrant/.cache/act/actions-setup-python@v1.1.1 dst=/actions/
[TAP CI - dev/Deploy TAP package] 💬 ##[debug]Semantic version spec of 3.7 is 3.7
[TAP CI - dev/Deploy TAP package] 💬 ##[debug]isExplicit:
[TAP CI - dev/Deploy TAP package] 💬 ##[debug]explicit? false
[TAP CI - dev/Deploy TAP package] 💬 ##[debug]evaluating 0 versions
[TAP CI - dev/Deploy TAP package] 💬 ##[debug]match not found
[TAP CI - dev/Deploy TAP package] ❗ ##[error]Version 3.7 with arch x64 not found%0AAvailable versions:%0A%0A
[TAP CI - dev/Deploy TAP package] ❌ Failure - actions/setup-python@v1.1.1
Error: exit with `FAILURE`: 1
``` | https://github.com/nektos/act/issues/148 | https://github.com/nektos/act/pull/1830 | 481999f59da465d2d8743fd3d97e24bb6e1a2311 | b0996e057750223a14321ff92c1c6d6fbb8e1a10 | "2020-03-12T22:10:48Z" | go | "2023-05-29T03:35:06Z" |
closed | nektos/act | https://github.com/nektos/act | 145 | ["go.mod", "go.sum"] | Seemingly invalid path separator on Windows using actions/setup-node | Running command
```powershell
act -P ubuntu-latest=nektos/act-environments-ubuntu:18.04
```
on `Microsoft Windows 10 Pro Build 10.0.18363` using `release v0.2.6`
```yaml
- name: Set up Node.js version 13.x
uses: actions/setup-node@v1
with:
node-version: 13.x
```
results in
```powershell
[Continuous Delivery/Node 13.x] 🐳 docker cp src=act/actions-setup-node@v1 dst=/actions\
| internal/modules/cjs/loader.js:985
| throw err;
| ^
|
| Error: Cannot find module '/github/workspace/\actions\actions-setup-node@v1\dist\index.js'
| at Function.Module._resolveFilename (internal/modules/cjs/loader.js:982:15)
| at Function.Module._load (internal/modules/cjs/loader.js:864:27)
| at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:74:12)
| at internal/main/run_main_module.js:18:47 {
| code: 'MODULE_NOT_FOUND',
| requireStack: []
| }
[Continuous Delivery/Node 13.x] ❌ Failure - Set up Node.js version 13.x
Error: exit with `FAILURE`: 1
```
This seems to be an issue with Windows and POSIX not using the same path separators. However, I don't know whether the issue lies with [nektos/act](https://github.com/nektos/act) or with [actions/setup-node](https://github.com/actions/setup-node). If you need a reproduction, the issue currently occurs with [trutoo/event-bus](https://github.com/trutoo/event-bus). It does, however, work in my WSL. | https://github.com/nektos/act/issues/145 | https://github.com/nektos/act/pull/1311 | 12029e3df5841538a806293881f836f88ec07a2c | cf9d82f69daab9dc907b893a38307a17f444c529 | "2020-03-11T11:42:18Z" | go | "2022-08-22T02:17:13Z" |
closed | nektos/act | https://github.com/nektos/act | 141 | ["go.mod", "go.sum"] | [Question] is act container with hostNetwork supported? | Hi, I'm trying to use an [action](https://github.com/engineerd/setup-kind) that creates a [kind](https://kind.sigs.k8s.io/) cluster, which works just fine, but the action opens a port on 127.0.0.1 that cannot be reached from any step. Is it possible to use hostNetwork, or what is the suggested approach for this scenario?
Thanks | https://github.com/nektos/act/issues/141 | https://github.com/nektos/act/pull/1379 | c4be1cfd6d8ab6664c3a3ffd7c4e5e253fa0b77f | c7ccb0dde9ab6332af7940e6f61e3ab73685fd07 | "2020-03-09T14:50:07Z" | go | "2022-10-10T02:37:39Z" |
closed | nektos/act | https://github.com/nektos/act | 128 | ["go.mod", "go.sum"] | Running an action in its own repository results in an empty action name | act Version: 0.2.5
OS: macOS & Windows 10
When starting act in [cedrickring/golang-action](https://github.com/cedrickring/golang-action) the Docker build fails with `Error: invalid reference format` (Command: `docker build -t act-:latest <workdir>`).
On [this line](https://github.com/nektos/act/blob/1f9f3b826e9cf56ddbfb4ad0c796b5a4da4b32a1/pkg/runner/step_context.go#L244) the action name becomes an empty string.
### Steps to reproduce
1. Clone [cedrickring/golang-action](https://github.com/cedrickring/golang-action)
2. Enter the repostory directory
3. Run `act`
| https://github.com/nektos/act/issues/128 | https://github.com/nektos/act/pull/1106 | 407d324ec1a5d9242114971e9e18e8469538924c | 065d630af01b4ed2a0c797d54933ea3b65c3cb38 | "2020-03-06T17:55:39Z" | go | "2022-04-05T13:48:53Z" |
closed | nektos/act | https://github.com/nektos/act | 106 | ["go.mod", "go.sum"] | Is runs-on ignored? | I am getting my workflow run on the node image, even though it requires Ubuntu:
```
❯ act
[Foo/build] 🚀 Start image=node:12.6-buster-slim
[Foo/build] 🐳 docker run image=node:12.6-buster-slim entrypoint=["/usr/bin/tail" "-f" "/dev/null"] cmd=[]
[Foo/build] 🐳 docker cp src=/Users/foo/Sites/Foo/. dst=/github/workspace
[Foo/build] ⭐ Run Install MySQL Client
Get:2 http://deb.debian.org/debian buster InRelease [122 kB]
Get:1 http://security-cdn.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [49.3 kB]
Get:4 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:5 http://security-cdn.debian.org/debian-security buster/updates/main amd64 Packages [181 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [7380 B]
Fetched 8331 kB in 3s (2453 kB/s)
Reading package lists... Done
Building dependency tree !
Reading state information... Done
| 35 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree !
Reading state information... Done
| Package libmysqlclient-dev is not available, but is referred to by another package.
| This may mean that the package is missing, has been obsoleted, or
| is only available from another source
| However the following packages replace it:
| libmariadb-dev-compat libmariadb-dev
|
| Package mysql-client-5.7 is not available, but is referred to by another package.
| This may mean that the package is missing, has been obsoleted, or
| is only available from another source
| However the following packages replace it:
| mariadb-server-core-10.3 mariadb-client-10.3
|
| E: Package 'mysql-client-5.7' has no installation candidate
| E: Package 'libmysqlclient-dev' has no installation candidate
[Foo/build] ❌ Failure - Install MySQL Client
Error: exit with `FAILURE`: 100
```
This is stripped down workflow:
```
❯ cat .github/workflows/foo.yml
name: Foo
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Install MySQL Client
run: |
apt update
apt install mysql-client-5.7 libmysqlclient-dev
``` | https://github.com/nektos/act/issues/106 | https://github.com/nektos/act/pull/1018 | 18792f9620fd2fd7dacc53ae30a234fbfc7c81b1 | 9a54c99ac76a2667551bfd85c5aabcabc969e9dd | "2020-02-26T09:46:13Z" | go | "2022-02-28T14:35:14Z" |
closed | nektos/act | https://github.com/nektos/act | 104 | ["pkg/runner/expression.go", "pkg/runner/expression_test.go"] | Unable to interpolate string - hyphenated variables | I'm facing an issue when I'm trying to reference a `steps.<id>.outputs.<variable>` with an id containing `-`.
```
Run echo ${{ steps.login-ecr.outputs.registry }}
ERRO[0005] Unable to interpolate string 'echo ${{ steps.login-ecr.outputs.registry }}' - [ReferenceError: 'ecr' is not defined]
```
_Workaround: remove all `-` from id names._ | https://github.com/nektos/act/issues/104 | https://github.com/nektos/act/pull/287 | 64b8d2afa47ff6938d7617e00f1260a95c35268e | 7dcd0bc1bb0cc4bce97d96ed0b854ea3efe44c49 | "2020-02-25T21:47:54Z" | go | "2020-06-24T14:08:45Z" |
closed | nektos/act | https://github.com/nektos/act | 22 | ["pkg/runner/action_cache.go", "pkg/runner/action_cache_test.go"] | workflow parser too permissive with local path references to actions in uses clauses | act version: 0.0.5
platform: OSX 10.14.2
When referencing actions that are contained in subdirectories of the same repo, i.e.:
action "Lint" {
**uses = "./.github/actions/go"**
args = "lint"
}
the act parser allows you to write the uses clause without "./" at the start like this:
action "Lint" {
**uses = ".github/actions/go"**
args = "lint"
}
Although this is perfectly valid for act and will run locally, the GitHub Actions parser won't be as permissive and will emit a parse error when you push your code.
Behaviors should be aligned...
| https://github.com/nektos/act/issues/22 | https://github.com/nektos/act/pull/2208 | 852959e1e14bf5cd9c0284d15ca899bef68ff4af | 5601fb0e136fad09282705ef65cbc9b2c6f0bebf | "2019-01-24T18:21:24Z" | go | "2024-02-18T03:53:22Z" |
closed | nektos/act | https://github.com/nektos/act | 2 | ["pkg/container/docker_run.go", "pkg/container/docker_run_test.go"] | act -l broken for a run | workflow action looks like:
```
action "docker-login" {
uses = "docker://docker"
runs = ["sh", "-c", "echo $DOCKER_AUTH | docker login --username $REGISTRY_USER --password-stdin"]
secrets = ["DOCKER_AUTH"]
env = {
REGISTRY_USER = "jess"
}
}
```
I get
```
$ act -l
Error: At 8:10: root.Action.docker-login.Runs: unknown type for string *ast.ListType
```
Maybe it's the pipe or env variable, haven't dug in yet :)
| https://github.com/nektos/act/issues/2 | https://github.com/nektos/act/pull/2242 | 352ad41ad2b8a205b12442859280b0e938b7e4ab | 119ceb81d906be5e1b23bd085f642ffde7144ef5 | "2019-01-15T17:01:06Z" | go | "2024-03-08T01:25:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 60,747 | ["src/IO/S3/Requests.cpp", "src/IO/S3/Requests.h"] | INSERT INTO s3 function fails to perform multipart upload when setting custom metadata headers | **Describe what's wrong**
When executing an `INSERT INTO s3(...)` query, if a metadata header (`x-amz-meta-`) is set and the resulting file is large enough to produce a multipart upload (50 MB by default), the insertion & upload to S3 fails with the following error:
```
[Error] Unable to parse ExceptionName: InvalidArgument Message: Metadata cannot be specified in this context. (S3_ERROR)
```
**Does it reproduce on the most recent release?**
Yes. Tested with both versions `23.12` and master, fails in both.
**How to reproduce**
All default settings; try to upload the file to actual S3 (this doesn't happen with MinIO):
```
INSERT INTO FUNCTION s3('https://my-bucket.s3.eu-west-3.amazonaws.com/myfile.csv', 'REDACTED', 'REDACTED', 'CSV', headers('x-amz-meta-my-meta-header' = 'value')) SELECT *
FROM generateRandom('uuid UUID, context Text, dt Datetime')
LIMIT 1000000
```
This will generate a 50+ MB CSV file that will trigger a multipart upload, and then will fail with the error above.
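A possible mitigation sketch while the bug stands (my suggestion, not from the report): raise the single-part threshold so a file of this size never goes multipart. `s3_max_single_part_upload_size` is an existing ClickHouse setting; treating it as an adequate workaround for your file sizes is an assumption.
```
-- Sketch: keep the upload below the raised single-part limit (100 MiB here),
-- so the 50+ MB file is sent as one PutObject and the metadata headers are accepted.
INSERT INTO FUNCTION s3('https://my-bucket.s3.eu-west-3.amazonaws.com/myfile.csv',
                        'REDACTED', 'REDACTED', 'CSV',
                        headers('x-amz-meta-my-meta-header' = 'value'))
SELECT *
FROM generateRandom('uuid UUID, context Text, dt Datetime')
LIMIT 1000000
SETTINGS s3_max_single_part_upload_size = 104857600;
```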
**Expected behavior**
The file is uploaded successfully to S3 when I specify custom metadata headers, even if a multipart upload is used.
**Error message and/or stacktrace**
```
Code: 499. DB::Exception: Unable to parse ExceptionName: InvalidArgument Message: Metadata cannot be specified in this context. (S3_ERROR) (version 23.12.2.59 (official build))
stack_trace:
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000bae58c8 in /usr/bin/clickhouse
1. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::writePart(DB::WriteBufferFromS3::PartData&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000000f0a90c4 in /usr/bin/clickhouse
2. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::WriteBufferFromS3::TaskTracker::add(std::function<void ()>&&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000000f0acc84 in /usr/bin/clickhouse
3. std::__packaged_task_func<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'(), std::allocator<std::function<std::future<void> (std::function<void ()>&&, Priority)> DB::threadPoolCallbackRunner<void, std::function<void ()>>(ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&, String const&)::'lambda'(std::function<void ()>&&, Priority)::operator()(std::function<void ()>&&, Priority)::'lambda'()>, void ()>::operator()() @ 0x000000000f089fd0 in /usr/bin/clickhouse
4. std::packaged_task<void ()>::operator()() @ 0x000000000e983260 in /usr/bin/clickhouse
5. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000bbb0c10 in /usr/bin/clickhouse
6. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000bbb3d00 in /usr/bin/clickhouse
7. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000bbb2d08 in /usr/bin/clickhouse
8. start_thread @ 0x0000000000007624 in /usr/lib/aarch64-linux-gnu/libpthread-2.31.so
9. ? @ 0x00000000000d162c in /usr/lib/aarch64-linux-gnu/libc-2.31.so
```
**Additional context**
This only happens when uploading data to actual S3; I haven't been able to replicate it with MinIO or GCS.
| https://github.com/ClickHouse/ClickHouse/issues/60747 | https://github.com/ClickHouse/ClickHouse/pull/60748 | 6fa2d4a199919dbedde6fe12d593eea6f47181f2 | 71029373f25c80a8e8ac1ab7d8f0d5a8b49d778d | "2024-03-04T08:51:02Z" | c++ | "2024-03-04T16:04:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 60,726 | ["src/DataTypes/getLeastSupertype.cpp", "tests/queries/0_stateless/03003_enum_and_string_compatible.reference", "tests/queries/0_stateless/03003_enum_and_string_compatible.sql"] | Enum and String types should be compatible | **Use case**
```
milovidov-desktop :) WITH 'Hello'::Enum8('Hello', 'World') AS enum SELECT [enum, 'Goodbye']
WITH CAST('Hello', 'Enum8(\'Hello\', \'World\')') AS enum
SELECT [enum, 'Goodbye']
Query id: 4152f5df-df8f-4570-8a47-8fe38e76b439
Elapsed: 0.022 sec.
Received exception:
Code: 386. DB::Exception: There is no supertype for types Enum8('Hello' = 1, 'World' = 2), String because some of them are String/FixedString and some of them are not: While processing [CAST('Hello', 'Enum8(\'Hello\', \'World\')') AS enum, 'Goodbye']. (NO_COMMON_TYPE)
``` | https://github.com/ClickHouse/ClickHouse/issues/60726 | https://github.com/ClickHouse/ClickHouse/pull/60727 | ce6dec65cf20b4df756a9dc2fc63d655a634cf47 | 0d3271e54829fa7924a706ec5c85645668f5e6ca | "2024-03-03T21:41:31Z" | c++ | "2024-03-04T15:58:58Z" |
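Until such a supertype exists, a workaround sketch (my addition, not from the issue): cast the Enum side to String so both array elements share a type.
```
-- Workaround sketch: the explicit cast makes both elements String.
WITH 'Hello'::Enum8('Hello', 'World') AS enum
SELECT [enum::String, 'Goodbye'];  -- ['Hello','Goodbye']
```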
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 60,653 | ["src/Functions/FunctionBinaryArithmetic.h", "src/Functions/IsOperation.h", "tests/queries/0_stateless/03002_int_div_decimal_with_date_bug.reference", "tests/queries/0_stateless/03002_int_div_decimal_with_date_bug.sql"] | Logical error: IntDiv with decimal + date | Source report: https://s3.amazonaws.com/clickhouse-test-reports/0/1b042be13b68c3c0f1a99a22115bd71e003d90a9/ast_fuzzer__asan_.html
Simplest repro:
```
SELECT intDiv(CAST('1.0', 'Decimal256(3)'), today())
```
```
<Fatal> : Logical error: 'Arguments of 'intDiv' have incorrect data types: 'CAST('1.0', 'Decimal256(3)')' of type 'Decimal(76, 3)', 'today()' of type 'Date''
``` | https://github.com/ClickHouse/ClickHouse/issues/60653 | https://github.com/ClickHouse/ClickHouse/pull/60672 | 705311edb99681459606e31f58a760c43374b38d | d93dd98b66ee8e44b0c99a0064fa68d98d93c7e9 | "2024-03-01T15:15:52Z" | c++ | "2024-03-05T12:41:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 60,495 | ["src/Functions/array/FunctionArrayMapped.h", "tests/queries/0_stateless/03001_bad_error_message_higher_order_functions.reference", "tests/queries/0_stateless/03001_bad_error_message_higher_order_functions.sh"] | Misleading error message of SIZES_OF_ARRAYS_DONT_MATCH | Hello! In recent Clickhouse versions this error is misleading (especially on large queries):
```sql
SELECT arrayMap((x,y) -> x + y, [1,2,3], [1,2])
```
> Received exception from server (version 24.2.1):
Code: 190. DB::Exception: Received from localhost:9000. DB::Exception: Arrays passed to arrayMap must have equal size. Argument 2 has size 1, but expected 1: while executing 'FUNCTION arrayMap(__lambda :: 2, [1, 2, 3] :: 0, [1, 2] :: 1) -> arrayMap(lambda(tuple(x, y), plus(x, y)), [1, 2, 3], [1, 2]) Array(UInt16) : 3'. (SIZES_OF_ARRAYS_DONT_MATCH)
(query: SELECT arrayMap((x,y) -> x + y, [1,2,3], [1,2])
This part of the message is definitely wrong: `Argument 2 has size 1, but expected 1`.
| https://github.com/ClickHouse/ClickHouse/issues/60495 | https://github.com/ClickHouse/ClickHouse/pull/60518 | 7f4d16e168d39277f8e1af386bd5188e2185d362 | 60be9f30a25ee0a270676f132c53f0b4a1934c9f | "2024-02-28T13:45:52Z" | c++ | "2024-03-03T01:06:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 60,366 | ["src/Functions/DateTimeTransforms.h", "tests/queries/0_stateless/02956_fix_to_start_of_milli_microsecond.reference", "tests/queries/0_stateless/02956_fix_to_start_of_milli_microsecond.sql"] | toStartOfInterval returns wrong result when interval is in milliseconds | The following two queries return different values on ClickHouse server 24.1:
`select toStartOfInterval(toDateTime64('2015-01-01 00:01:34', 9), interval 100000 MILLISECONDS)`
`select toStartOfInterval(toDateTime64('2015-01-01 00:01:34', 9), interval 100 second)`
https://fiddle.clickhouse.com/00959742-a277-4ff9-9c37-28315f3fbc53
The second version is correct.
This issue doesn't exist with 23.12 or prior version. | https://github.com/ClickHouse/ClickHouse/issues/60366 | https://github.com/ClickHouse/ClickHouse/pull/60763 | 7989e9dc70895edd1db07f36b3918c9c538fd023 | 2eb7fa297b5e3ab1656bc1560990fe6bf1e66ed6 | "2024-02-23T19:08:51Z" | c++ | "2024-03-06T12:55:21Z" |
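A side-by-side check that makes the divergence visible, reusing the issue's own expressions:
```
-- On an affected 24.1 server `same` is 0; on 23.12 (and after the fix) it is 1.
SELECT
    toStartOfInterval(toDateTime64('2015-01-01 00:01:34', 9), interval 100000 MILLISECONDS) AS by_ms,
    toStartOfInterval(toDateTime64('2015-01-01 00:01:34', 9), interval 100 second) AS by_s,
    by_ms = by_s AS same;
```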
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 60,301 | ["docs/en/sql-reference/table-functions/merge.md", "src/TableFunctions/TableFunctionMerge.cpp", "tests/queries/0_stateless/01902_table_function_merge_db_params.reference", "tests/queries/0_stateless/01902_table_function_merge_db_params.sql"] | Single-argument version for the `merge` table function | **Use case**
```
clickhouse-cloud :) SELECT check_name, count() FROM merge('^text_log_') WHERE pull_request_number = 58640 AND commit_sha = 'a680fcb1f7389340b46cc38c555cc83e1a68e955' GROUP BY check_name
SELECT
check_name,
count()
FROM merge('^text_log_')
WHERE (pull_request_number = 58640) AND (commit_sha = 'a680fcb1f7389340b46cc38c555cc83e1a68e955')
GROUP BY check_name
Query id: 7d6e6b87-de2c-4853-a5d8-21c6898f22f1
Elapsed: 0.248 sec.
Received exception from server (version 24.2.1):
Code: 42. DB::Exception: Received from p12uiq1ogd.us-east-2.aws.clickhouse-staging.com:9440. DB::Exception: Table function 'merge' requires exactly 2 arguments - name of source database and regexp for table names.. (NUMBER_OF_ARGUMENTS_DOESNT_MATCH)
clickhouse-cloud :) SELECT check_name, count() FROM merge(default, '^text_log_') WHERE pull_request_number = 58640 AND commit_sha = 'a680fcb1f7389340b46cc38c555cc83e1a68e955' GROUP BY check_name
```
If there is a single argument, the current database will be used. | https://github.com/ClickHouse/ClickHouse/issues/60301 | https://github.com/ClickHouse/ClickHouse/pull/60372 | 795c1a98dcf1b9c955fa76d1750ba4cc3a004b7c | 1a9ee8c31886e5501b54fc853aaa6e065b221b6a | "2024-02-22T13:26:47Z" | c++ | "2024-02-26T12:43:04Z" |
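For illustration, here is the proposed single-argument form next to an equivalent spelling that already works today; the one-argument call is the request, not existing syntax at the time of the report.
```
-- Proposed: the current database is implied.
SELECT count() FROM merge('^text_log_');

-- Works today: pass the database explicitly via currentDatabase().
SELECT count() FROM merge(currentDatabase(), '^text_log_');
```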
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 60,232 | ["tests/queries/0_stateless/02998_analyzer_prewhere_report.reference", "tests/queries/0_stateless/02998_analyzer_prewhere_report.sql"] | New analyzer: createColumn() is not implemented for data type Function | Repro: https://fiddle.clickhouse.com/11e6f54d-a716-4adc-a643-ea7ba068a4f8
```sql
create table hits (date Date, data Array(UInt32)) engine=MergeTree PARTITION BY toYYYYMM(date) ORDER BY date;
insert into hits values('2024-01-01', [1, 2, 3])
select
hits.date,
arrayFilter(x -> x in (2, 3), data) as filtered
from
hits
where
arrayExists(x -> x in (2, 3), data)
settings allow_experimental_analyzer=1;
```
```
Code: 48. DB::Exception: Received from localhost:9000. DB::Exception: Method createColumn() is not implemented for data type Function(UInt32 -> UInt8). (NOT_IMPLEMENTED)
```
Note that this fails only if both the `arrayFilter` and `arrayExists` lines are present at the same time; removing either of them makes the query work. | https://github.com/ClickHouse/ClickHouse/issues/60232 | https://github.com/ClickHouse/ClickHouse/pull/60244 | 5c967c44de6238dea9b1a49309e014b4cb2b758e | 56f7ee740abfb2db629f3943e6ae24e06c398c73 | "2024-02-21T15:22:53Z" | c++ | "2024-02-26T11:00:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 60,163 | ["src/IO/S3/Client.cpp", "tests/queries/0_stateless/02966_s3_access_key_id_restriction.reference", "tests/queries/0_stateless/02966_s3_access_key_id_restriction.sql"] | S3 Error: Access key id has an invalid character after updating to 24.1 - ClickHouse won't start up anymore | After upgrading our instance of ClickHouse from version 23.7.5.30 to the current 24.1 release, ClickHouse hangs during startup and aborts its startup routine.
Some investigation led us to the conclusion that something seems to be wrong with the S3 connection to our MinIO service:
`Code: 36. DB::Exception: Access key id has an invalid character: Cannot attach table `
We checked everything but could not find anything suspicious in the setup.
In the end we had to roll back the version to get ClickHouse up and running again.
With the same S3 setup, everything works as expected, just as it did before.
**Expected behavior**
1. If ClickHouse finds a problem with one or a few tables while loading, this should not interrupt the startup of the whole service.
* Is there a server setting to control this behaviour? If, e.g., in version 23.7.5.30 there is a problem with an S3 table, this only matters when trying to access the data.
2. S3 connections should work as they did before.
**Error message and/or stacktrace**
```
2024.02.19 17:30:26.471919 [ 9718 ] {} <Error> Application: Caught exception while loading metadata: Code: 695. DB::Exception: Load job 'load table p_DIZ_analytics.DIZ23_003_Triagedaten' failed:
Code: 36. DB::Exception: Access key id has an invalid character: Cannot attach table `p_DIZ_analytics`.`DIZ23_003_Triagedaten`
from metadata file /opt/data/clickhouse/store/146/146e3b77-35ba-4602-80fb-c4ea4261c8fe/DIZ23_003_Triagedaten.sql from query
ATTACH TABLE p_DIZ_analytics.DIZ23_003_Triagedaten UUID '6770ff97-436d-40f1-89c3-237e64c6690c' (`patient_id` String, `start_date` DateTime64(3), `ukf_ou_id` String, `modifier_cd` String,
`value_char` Nullable(String), `Jahr` UInt16, `concept_cd` String, `deleted` UInt8) ENGINE = S3('https://xxxxxxx:9000/diz-dataprojects/DIZ23-003_Triagedaten.csv', 'CSV').
(BAD_ARGUMENTS), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c7faf9b in /usr/bin/clickhouse
1. DB::Exception::Exception<>(int, FormatStringHelperImpl<>) @ 0x0000000007214283 in /usr/bin/clickhouse
3. DB::S3::ClientFactory::create(DB::S3::PocoHTTPClientConfiguration const&, DB::S3::ClientSettings, String const&, String const&, String const&, DB::S3::ServerSideEncryptionKMSConfig, std::vector<DB::HTTPHeaderEntry, std::allocator<DB::HTTPHeaderEntry>>, DB::S3::CredentialsConfiguration, String const&) @ 0x00000000101283f4 in /usr/bin/clickhouse
4. DB::StorageS3::Configuration::connect(std::shared_ptr<DB::Context const>) @ 0x0000000011fe0307 in /usr/bin/clickhouse
5. DB::StorageS3::Configuration::update(std::shared_ptr<DB::Context const>) @ 0x0000000011fdeb11 in /usr/bin/clickhouse
6. DB::StorageS3::updateConfiguration(std::shared_ptr<DB::Context const>) @ 0x0000000011fded87 in /usr/bin/clickhouse
7. DB::StorageS3::StorageS3(DB::StorageS3::Configuration const&, std::shared_ptr<DB::Context const>, DB::StorageID const&, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, String const&, std::optional<DB::FormatSettings>, bool, std::shared_ptr<DB::IAST>) @ 0x0000000011fd2e21 in /usr/bin/clickhouse
8. std::shared_ptr<DB::IStorage> std::__function::__policy_invoker<std::shared_ptr<DB::IStorage> (DB::StorageFactory::Arguments const&)>::__call_impl<std::__function::__default_alloc_func<DB::registerStorageS3Impl(String const&, DB::StorageFactory&)::$_0, std::shared_ptr<DB::IStorage> (DB::StorageFactory::Arguments const&)>>(std::__function::__policy_storage const*, DB::StorageFactory::Arguments const&) (.llvm.8425286739849586231) @ 0x0000000011ff4b1f in /usr/bin/clickhouse
9. DB::StorageFactory::get(DB::ASTCreateQuery const&, String const&, std::shared_ptr<DB::Context>, std::shared_ptr<DB::Context>, DB::ColumnsDescription const&, DB::ConstraintsDescription const&, bool) const @ 0x0000000011d63c1b in /usr/bin/clickhouse
10. DB::createTableFromAST(DB::ASTCreateQuery, String const&, String const&, std::shared_ptr<DB::Context>, bool) @ 0x00000000109308d3 in /usr/bin/clickhouse
11. DB::DatabaseOrdinary::loadTableFromMetadata(std::shared_ptr<DB::Context>, String const&, DB::QualifiedTableName const&, std::shared_ptr<DB::IAST> const&, DB::LoadingStrictnessLevel) @ 0x00000000109519dc in /usr/bin/clickhouse
12. void std::__function::__policy_invoker<void (DB::AsyncLoader&, std::shared_ptr<DB::LoadJob> const&)>::__call_impl<std::__function::__default_alloc_func<DB::DatabaseOrdinary::loadTableFromMetadataAsync(DB::AsyncLoader&, std::unordered_set<std::shared_ptr<DB::LoadJob>, std::hash<std::shared_ptr<DB::LoadJob>>, std::equal_to<std::shared_ptr<DB::LoadJob>>, std::allocator<std::shared_ptr<DB::LoadJob>>>, std::shared_ptr<DB::Context>, String const&, DB::QualifiedTableName const&, std::shared_ptr<DB::IAST> const&, DB::LoadingStrictnessLevel)::$_0, void (DB::AsyncLoader&, std::shared_ptr<DB::LoadJob> const&)>>(std::__function::__policy_storage const*, DB::AsyncLoader&, std::shared_ptr<DB::LoadJob> const&) @ 0x000000001095a2f3 in /usr/bin/clickhouse
13. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::AsyncLoader::spawn(DB::AsyncLoader::Pool&, std::unique_lock<std::mutex>&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000000c95f346 in /usr/bin/clickhouse
```
**S3 region not set - warning**
Further investigation found following warning in the log
```
2024.02.20 09:12:38.974104 [ 28468 ] {} <Information> AWSClient: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2024.02.20 09:12:38.974117 [ 28468 ] {} <Information> AWSClient: Request failed, now waiting 0 ms before attempting again.
2024.02.20 09:12:38.998374 [ 11583 ] {} <Information> AWSClient: Response status: 400, Bad Request
2024.02.20 09:12:38.998523 [ 11583 ] {} <Information> AWSClient: AWSErrorMarshaller: Encountered Unknown AWSError 'AuthorizationHeaderMalformed': The authorization header is malformed; the region is wrong; expecting 'us-west-2'.
2024.02.20 09:12:38.998593 [ 11583 ] {} <Information> AWSClient: AWSXmlClient: HTTP response code: 400
Resolved remote host IP address: xxxxxxxxx:9000
Request ID: 17B583FDA0510D59
Exception name: AuthorizationHeaderMalformed
Error message: Unable to parse ExceptionName: AuthorizationHeaderMalformed Message: The authorization header is malformed; the region is wrong; expecting 'us-west-2'.
```
even version 23.7.5.30 reports this error, everthing works fine. Maybe 24.1 handles this more strict, and the `Access key id has an invalid character` is a misleading error text
| https://github.com/ClickHouse/ClickHouse/issues/60163 | https://github.com/ClickHouse/ClickHouse/pull/60181 | 9abd28625f138feb663fc8ef479a3022c49a4cbf | 946d65dbdc1d5086e6f46fbfca6e2525a14d792a | "2024-02-20T08:10:22Z" | c++ | "2024-02-20T11:34:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 60,152 | ["docs/en/sql-reference/functions/bit-functions.md", "docs/en/sql-reference/functions/string-functions.md", "src/Functions/substring.cpp", "tests/queries/0_stateless/01033_function_substring.reference", "tests/queries/0_stateless/01033_function_substring.sql", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | Implement PostgreSQL `get_byte` function | **Use case**
Trying to make a UDF for a UUIDv1 parser; there is a [solution](https://stackoverflow.com/questions/24178485/cast-or-extract-timestamp-from-v1-uuid-in-postgresql) that uses PostgreSQL's [get_byte](https://www.postgresql.org/docs/9.0/functions-binarystring.html), and I believe it would be quite useful.
**Describe the solution you'd like**
Implement `get_byte(string, position)`, just like PostgreSQL's.
**Describe alternatives you've considered**
Writing the UDF in golang or something. | https://github.com/ClickHouse/ClickHouse/issues/60152 | https://github.com/ClickHouse/ClickHouse/pull/60494 | 4171ede55612529db585d1d82a932d6c5eafb154 | ed215a293afd78a635a71520b21c99a067296b17 | "2024-02-19T17:04:23Z" | c++ | "2024-03-01T10:23:22Z" |
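Pending a built-in, a minimal sketch of a PostgreSQL-style `get_byte` as a ClickHouse SQL UDF (my sketch, not part of the issue). PostgreSQL offsets are 0-based while ClickHouse's `substring` is 1-based, hence the `pos + 1`.
```
-- get_byte(s, pos): byte value at 0-based offset pos, PostgreSQL-style.
CREATE FUNCTION get_byte AS (s, pos) -> reinterpretAsUInt8(substring(s, pos + 1, 1));

SELECT get_byte('abc', 0);  -- 97, matching PostgreSQL's get_byte('abc'::bytea, 0)
```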
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 60,017 | ["src/Functions/FunctionSQLJSON.h", "tests/queries/0_stateless/03003_sql_json_nonsense.reference", "tests/queries/0_stateless/03003_sql_json_nonsense.sql"] | Use of uninitialized value in DB::FunctionSQLJSONHelpers::Executor | Happened here: https://s3.amazonaws.com/clickhouse-test-reports/59563/ea89154f9f112411066eae617883c50cb741faa7/ast_fuzzer__msan_.html
```
==238==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x55a8ff54436b in DB::FunctionSQLJSONHelpers::Executor<DB::NameJSONValue, DB::JSONValueImpl<DB::SimdJSONParser, DB::JSONStringSerializer<DB::SimdJSONParser::Element, DB::SimdJSONElementFormatter>>, DB::SimdJSONParser>::run(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, unsigned int, std::__1::shared_ptr<DB::Context const> const&) (/workspace/clickhouse+0x7fef36b) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#1 0x55a8ff541454 in DB::FunctionSQLJSON<DB::NameJSONValue, DB::JSONValueImpl>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x7fec454) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#2 0x55a8ff4bbc21 in DB::IFunction::executeImplDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x7f66c21) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#3 0x55a8ff4bb557 in DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x7f66557) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#4 0x55a9290d7a59 in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:246:15
#5 0x55a9290dbede in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:281:24
#6 0x55a9290e00e5 in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:378:16
#7 0x55a92ccb5c51 in DB::executeActionForPartialResult(DB::ActionsDAG::Node const*, std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>>, unsigned long) build_docker/./src/Interpreters/ActionsDAG.cpp:618:49
#8 0x55a92ccb5c51 in DB::ActionsDAG::evaluatePartialResult(std::__1::unordered_map<DB::ActionsDAG::Node const*, DB::ColumnWithTypeAndName, std::__1::hash<DB::ActionsDAG::Node const*>, std::__1::equal_to<DB::ActionsDAG::Node const*>, std::__1::allocator<std::__1::pair<DB::ActionsDAG::Node const* const, DB::ColumnWithTypeAndName>>>&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*>> const&, unsigned long, bool) build_docker/./src/Interpreters/ActionsDAG.cpp:785:48
#9 0x55a92ccb1109 in DB::ActionsDAG::updateHeader(DB::Block) const build_docker/./src/Interpreters/ActionsDAG.cpp:695:26
#10 0x55a9344cf962 in DB::ExpressionTransform::transformHeader(DB::Block, DB::ActionsDAG const&) build_docker/./src/Processors/Transforms/ExpressionTransform.cpp:8:23
#11 0x55a934a79def in DB::ExpressionStep::ExpressionStep(DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&) build_docker/./src/Processors/QueryPlan/ExpressionStep.cpp:31:9
#12 0x55a92ee6b819 in std::__1::__unique_if<DB::ExpressionStep>::__unique_single std::__1::make_unique[abi:v15000]<DB::ExpressionStep, DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&>(DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&) build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:714:32
#13 0x55a92ee6b819 in DB::InterpreterSelectQuery::executeExpression(DB::QueryPlan&, std::__1::shared_ptr<DB::ActionsDAG> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:2782:28
#14 0x55a92ee492b5 in DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::optional<DB::Pipe>) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:1781:21
#15 0x55a92ee424a8 in DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:927:5
#16 0x55a92f0c1292 in DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:304:38
#17 0x55a92f0c3c0b in DB::InterpreterSelectWithUnionQuery::execute() build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:378:5
#18 0x55a92fce5f44 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) build_docker/./src/Interpreters/executeQuery.cpp:1102:40
#19 0x55a92fcd87e2 in DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) build_docker/./src/Interpreters/executeQuery.cpp:1281:26
#20 0x55a933962b3e in DB::TCPHandler::runImpl() build_docker/./src/Server/TCPHandler.cpp:520:54
#21 0x55a9339b705b in DB::TCPHandler::run() build_docker/./src/Server/TCPHandler.cpp:2314:9
#22 0x55a937870dbf in Poco::Net::TCPServerConnection::start() build_docker/./base/poco/Net/src/TCPServerConnection.cpp:43:3
#23 0x55a937871c41 in Poco::Net::TCPServerDispatcher::run() build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:115:20
#24 0x55a937cf9065 in Poco::PooledThread::run() build_docker/./base/poco/Foundation/src/ThreadPool.cpp:188:14
#25 0x55a937cf5e2d in Poco::(anonymous namespace)::RunnableHolder::run() build_docker/./base/poco/Foundation/src/Thread.cpp:45:11
#26 0x55a937cf2d11 in Poco::ThreadImpl::runnableEntry(void*) build_docker/./base/poco/Foundation/src/Thread_POSIX.cpp:335:27
#27 0x7f41e9ba1ac2 in start_thread nptl/pthread_create.c:442:8
#28 0x7f41e9c3384f misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
Uninitialized value was created by a heap allocation
#0 0x55a8ff4496c2 in malloc (/workspace/clickhouse+0x7ef46c2) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#1 0x55a9175863ba in void* (anonymous namespace)::allocNoTrack<false, false>(unsigned long, unsigned long) build_docker/./src/Common/Allocator.cpp:68:19
#2 0x55a9175863ba in Allocator<false, false>::alloc(unsigned long, unsigned long) build_docker/./src/Common/Allocator.cpp:115:18
#3 0x55a8ff4f717e in void DB::PODArrayBase<8ul, 4096ul, Allocator<false, false>, 63ul, 64ul>::reserveForNextSize<>() (/workspace/clickhouse+0x7fa217e) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#4 0x55a93071b722 in void DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 63ul, 64ul>::push_back<unsigned long const&>(unsigned long const&) build_docker/./src/Common/PODArray.h:423:19
#5 0x55a93071b722 in DB::ColumnString::insert(DB::Field const&) build_docker/./src/Columns/ColumnString.h:128:17
#6 0x55a92c0c4067 in DB::IDataType::createColumnConst(unsigned long, DB::Field const&) const build_docker/./src/DataTypes/IDataType.cpp:60:13
#7 0x55a92d4283c4 in DB::ActionsMatcher::visit(DB::ASTLiteral const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) build_docker/./src/Interpreters/ActionsVisitor.cpp:1367:27
#8 0x55a92d4085ff in DB::ActionsMatcher::visit(std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) build_docker/./src/Interpreters/ActionsVisitor.cpp:705:9
#9 0x55a92d412b90 in DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) build_docker/./src/Interpreters/ActionsVisitor.cpp:1228:17
#10 0x55a92d408634 in DB::ActionsMatcher::visit(std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) build_docker/./src/Interpreters/ActionsVisitor.cpp:703:9
#11 0x55a92d4295db in DB::ActionsMatcher::visit(DB::ASTExpressionList&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) build_docker/./src/Interpreters/ActionsVisitor.cpp
#12 0x55a92d4087b4 in DB::ActionsMatcher::visit(std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) build_docker/./src/Interpreters/ActionsVisitor.cpp:707:9
#13 0x55a92d3e7054 in DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::doVisit(std::__1::shared_ptr<DB::IAST> const&) build_docker/./src/Interpreters/InDepthNodeVisitor.h:71:13
#14 0x55a92d362054 in void DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::visitImplMain<false>(std::__1::shared_ptr<DB::IAST> const&) build_docker/./src/Interpreters/InDepthNodeVisitor.h:61:9
#15 0x55a92d362054 in void DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::visitImpl<false>(std::__1::shared_ptr<DB::IAST> const&) build_docker/./src/Interpreters/InDepthNodeVisitor.h:51:13
#16 0x55a92d362054 in DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) build_docker/./src/Interpreters/InDepthNodeVisitor.h:32:13
#17 0x55a92d362054 in DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ActionsDAG>&, bool) build_docker/./src/Interpreters/ExpressionAnalyzer.cpp:484:48
#18 0x55a92d38cf71 in DB::SelectQueryExpressionAnalyzer::appendSelect(DB::ExpressionActionsChain&, bool) build_docker/./src/Interpreters/ExpressionAnalyzer.cpp:1511:5
#19 0x55a92d3a0fbd in DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::__1::shared_ptr<DB::FilterDAGInfo> const&, std::__1::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) build_docker/./src/Interpreters/ExpressionAnalyzer.cpp:2067:24
#20 0x55a92ee4f78a in DB::InterpreterSelectQuery::getSampleBlockImpl() build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:1002:23
#21 0x55a92ee2f6d9 in DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>)::$_0::operator()(bool) const build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:777:25
#22 0x55a92ee1c111 in DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:780:5
#23 0x55a92ee0f246 in DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:210:7
#24 0x55a92f0bd1f8 in std::__1::__unique_if<DB::InterpreterSelectQuery>::__unique_single std::__1::make_unique[abi:v15000]<DB::InterpreterSelectQuery, std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&>(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:714:32
#25 0x55a92f0bd1f8 in DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:255:16
SUMMARY: MemorySanitizer: use-of-uninitialized-value (/workspace/clickhouse+0x7fef36b) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee) in DB::FunctionSQLJSONHelpers::Executor<DB::NameJSONValue, DB::JSONValueImpl<DB::SimdJSONParser, DB::JSONStringSerializer<DB::SimdJSONParser::Element, DB::SimdJSONElementFormatter>>, DB::SimdJSONParser>::run(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, unsigned int, std::__1::shared_ptr<DB::Context const> const&)
Uninitialized bytes in write at offset 0 inside [0x7f40f12ed380, 392)
==238==WARNING: MemorySanitizer: use-of-uninitialized-value
2024.02.14 10:55:52.842564 [ 239 ] {} <Trace> BaseDaemon: Received signal -3
#0 0x55a9177f6085 in DB::WriteBufferFromFileDescriptorDiscardOnFailure::nextImpl() build_docker/./src/IO/WriteBufferFromFileDescriptorDiscardOnFailure.cpp:16:23
#1 0x55a917fdcfc9 in DB::WriteBuffer::next() build_docker/./src/IO/WriteBuffer.h:53:13
#2 0x55a917fd5558 in sanitizerDeathCallback() build_docker/./src/Daemon/BaseDaemon.cpp:568:9
#3 0x55a8ff428b55 in __sanitizer::Die() crtstuff.c
#4 0x55a8ff43a0b2 in __msan_warning_with_origin_noreturn (/workspace/clickhouse+0x7ee50b2) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#5 0x55a8ff54436b in DB::FunctionSQLJSONHelpers::Executor<DB::NameJSONValue, DB::JSONValueImpl<DB::SimdJSONParser, DB::JSONStringSerializer<DB::SimdJSONParser::Element, DB::SimdJSONElementFormatter>>, DB::SimdJSONParser>::run(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, unsigned int, std::__1::shared_ptr<DB::Context const> const&) (/workspace/clickhouse+0x7fef36b) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#6 0x55a8ff541454 in DB::FunctionSQLJSON<DB::NameJSONValue, DB::JSONValueImpl>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x7fec454) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#7 0x55a8ff4bbc21 in DB::IFunction::executeImplDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x7f66c21) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#8 0x55a8ff4bb557 in DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x7f66557) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#9 0x55a9290d7a59 in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:246:15
#10 0x55a9290dbede in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:281:24
#11 0x55a9290e00e5 in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:378:16
#12 0x55a92ccb5c51 in DB::executeActionForPartialResult(DB::ActionsDAG::Node const*, std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>>, unsigned long) build_docker/./src/Interpreters/ActionsDAG.cpp:618:49
#13 0x55a92ccb5c51 in DB::ActionsDAG::evaluatePartialResult(std::__1::unordered_map<DB::ActionsDAG::Node const*, DB::ColumnWithTypeAndName, std::__1::hash<DB::ActionsDAG::Node const*>, std::__1::equal_to<DB::ActionsDAG::Node const*>, std::__1::allocator<std::__1::pair<DB::ActionsDAG::Node const* const, DB::ColumnWithTypeAndName>>>&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*>> const&, unsigned long, bool) build_docker/./src/Interpreters/ActionsDAG.cpp:785:48
#14 0x55a92ccb1109 in DB::ActionsDAG::updateHeader(DB::Block) const build_docker/./src/Interpreters/ActionsDAG.cpp:695:26
#15 0x55a9344cf962 in DB::ExpressionTransform::transformHeader(DB::Block, DB::ActionsDAG const&) build_docker/./src/Processors/Transforms/ExpressionTransform.cpp:8:23
#16 0x55a934a79def in DB::ExpressionStep::ExpressionStep(DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&) build_docker/./src/Processors/QueryPlan/ExpressionStep.cpp:31:9
#17 0x55a92ee6b819 in std::__1::__unique_if<DB::ExpressionStep>::__unique_single std::__1::make_unique[abi:v15000]<DB::ExpressionStep, DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&>(DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&) build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:714:32
#18 0x55a92ee6b819 in DB::InterpreterSelectQuery::executeExpression(DB::QueryPlan&, std::__1::shared_ptr<DB::ActionsDAG> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:2782:28
#19 0x55a92ee492b5 in DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::optional<DB::Pipe>) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:1781:21
#20 0x55a92ee424a8 in DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:927:5
#21 0x55a92f0c1292 in DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:304:38
#22 0x55a92f0c3c0b in DB::InterpreterSelectWithUnionQuery::execute() build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:378:5
#23 0x55a92fce5f44 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) build_docker/./src/Interpreters/executeQuery.cpp:1102:40
#24 0x55a92fcd87e2 in DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) build_docker/./src/Interpreters/executeQuery.cpp:1281:26
#25 0x55a933962b3e in DB::TCPHandler::runImpl() build_docker/./src/Server/TCPHandler.cpp:520:54
#26 0x55a9339b705b in DB::TCPHandler::run() build_docker/./src/Server/TCPHandler.cpp:2314:9
#27 0x55a937870dbf in Poco::Net::TCPServerConnection::start() build_docker/./base/poco/Net/src/TCPServerConnection.cpp:43:3
#28 0x55a937871c41 in Poco::Net::TCPServerDispatcher::run() build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:115:20
#29 0x55a937cf9065 in Poco::PooledThread::run() build_docker/./base/poco/Foundation/src/ThreadPool.cpp:188:14
#30 0x55a937cf5e2d in Poco::(anonymous namespace)::RunnableHolder::run() build_docker/./base/poco/Foundation/src/Thread.cpp:45:11
#31 0x55a937cf2d11 in Poco::ThreadImpl::runnableEntry(void*) build_docker/./base/poco/Foundation/src/Thread_POSIX.cpp:335:27
#32 0x7f41e9ba1ac2 in start_thread nptl/pthread_create.c:442:8
#33 0x7f41e9c3384f misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
Uninitialized value was stored to memory at
#0 0x55a8ff44018a in __msan_memcpy (/workspace/clickhouse+0x7eeb18a) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#1 0x55a8ff4f4727 in DB::WriteBuffer::write(char const*, unsigned long) (/workspace/clickhouse+0x7f9f727) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#2 0x55a917fdcf16 in void DB::writePODBinary<int>(int const&, DB::WriteBuffer&) build_docker/./src/IO/WriteHelpers.h:88:9
#3 0x55a917fdcf16 in void DB::writeBinary<int>(int const&, DB::WriteBuffer&) build_docker/./src/IO/WriteHelpers.h:1031:59
#4 0x55a917fd5505 in sanitizerDeathCallback() build_docker/./src/Daemon/BaseDaemon.cpp:563:5
#5 0x55a8ff428b55 in __sanitizer::Die() crtstuff.c
#6 0x55a8ff54436b in DB::FunctionSQLJSONHelpers::Executor<DB::NameJSONValue, DB::JSONValueImpl<DB::SimdJSONParser, DB::JSONStringSerializer<DB::SimdJSONParser::Element, DB::SimdJSONElementFormatter>>, DB::SimdJSONParser>::run(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, unsigned int, std::__1::shared_ptr<DB::Context const> const&) (/workspace/clickhouse+0x7fef36b) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#7 0x55a8ff541454 in DB::FunctionSQLJSON<DB::NameJSONValue, DB::JSONValueImpl>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x7fec454) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#8 0x55a8ff4bbc21 in DB::IFunction::executeImplDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x7f66c21) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#9 0x55a8ff4bb557 in DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x7f66557) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#10 0x55a9290d7a59 in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:246:15
#11 0x55a9290dbede in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:281:24
#12 0x55a9290e00e5 in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:378:16
#13 0x55a92ccb5c51 in DB::executeActionForPartialResult(DB::ActionsDAG::Node const*, std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>>, unsigned long) build_docker/./src/Interpreters/ActionsDAG.cpp:618:49
#14 0x55a92ccb5c51 in DB::ActionsDAG::evaluatePartialResult(std::__1::unordered_map<DB::ActionsDAG::Node const*, DB::ColumnWithTypeAndName, std::__1::hash<DB::ActionsDAG::Node const*>, std::__1::equal_to<DB::ActionsDAG::Node const*>, std::__1::allocator<std::__1::pair<DB::ActionsDAG::Node const* const, DB::ColumnWithTypeAndName>>>&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*>> const&, unsigned long, bool) build_docker/./src/Interpreters/ActionsDAG.cpp:785:48
#15 0x55a92ccb1109 in DB::ActionsDAG::updateHeader(DB::Block) const build_docker/./src/Interpreters/ActionsDAG.cpp:695:26
#16 0x55a9344cf962 in DB::ExpressionTransform::transformHeader(DB::Block, DB::ActionsDAG const&) build_docker/./src/Processors/Transforms/ExpressionTransform.cpp:8:23
#17 0x55a934a79def in DB::ExpressionStep::ExpressionStep(DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&) build_docker/./src/Processors/QueryPlan/ExpressionStep.cpp:31:9
#18 0x55a92ee6b819 in std::__1::__unique_if<DB::ExpressionStep>::__unique_single std::__1::make_unique[abi:v15000]<DB::ExpressionStep, DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&>(DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&) build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:714:32
#19 0x55a92ee6b819 in DB::InterpreterSelectQuery::executeExpression(DB::QueryPlan&, std::__1::shared_ptr<DB::ActionsDAG> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:2782:28
#20 0x55a92ee492b5 in DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::optional<DB::Pipe>) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:1781:21
#21 0x55a92ee424a8 in DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:927:5
#22 0x55a92f0c1292 in DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:304:38
Member fields were destroyed
#0 0x55a8ff449add in __sanitizer_dtor_callback_fields (/workspace/clickhouse+0x7ef4add) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#1 0x55a93593b4a1 in std::__1::vector<DB::Token, std::__1::allocator<DB::Token>>::_ConstructTransaction::~_ConstructTransaction() build_docker/./contrib/llvm-project/libcxx/include/vector:795:13
#2 0x55a93593b4a1 in std::__1::vector<DB::Token, std::__1::allocator<DB::Token>>::_ConstructTransaction::~_ConstructTransaction() build_docker/./contrib/llvm-project/libcxx/include/vector:793:5
#3 0x55a93593b4a1 in void std::__1::vector<DB::Token, std::__1::allocator<DB::Token>>::__construct_one_at_end[abi:v15000]<DB::Token>(DB::Token&&) build_docker/./contrib/llvm-project/libcxx/include/vector:811:3
#4 0x55a93593b4a1 in DB::Token& std::__1::vector<DB::Token, std::__1::allocator<DB::Token>>::emplace_back<DB::Token>(DB::Token&&) build_docker/./contrib/llvm-project/libcxx/include/vector:1597:9
#5 0x55a93593b4a1 in DB::Tokens::Tokens(char const*, char const*, unsigned long, bool) build_docker/./src/Parsers/TokenIterator.cpp:17:18
#6 0x55a8ff542f29 in DB::FunctionSQLJSONHelpers::Executor<DB::NameJSONValue, DB::JSONValueImpl<DB::SimdJSONParser, DB::JSONStringSerializer<DB::SimdJSONParser::Element, DB::SimdJSONElementFormatter>>, DB::SimdJSONParser>::run(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, unsigned int, std::__1::shared_ptr<DB::Context const> const&) (/workspace/clickhouse+0x7fedf29) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#7 0x55a8ff541454 in DB::FunctionSQLJSON<DB::NameJSONValue, DB::JSONValueImpl>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x7fec454) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#8 0x55a8ff4bbc21 in DB::IFunction::executeImplDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x7f66c21) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#9 0x55a8ff4bb557 in DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x7f66557) (BuildId: 749b96b7a6f73e2be4b82d394b9a94915718fbee)
#10 0x55a9290d7a59 in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:246:15
#11 0x55a9290dbede in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:281:24
#12 0x55a9290e00e5 in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/./src/Functions/IFunction.cpp:378:16
#13 0x55a92ccb5c51 in DB::executeActionForPartialResult(DB::ActionsDAG::Node const*, std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>>, unsigned long) build_docker/./src/Interpreters/ActionsDAG.cpp:618:49
#14 0x55a92ccb5c51 in DB::ActionsDAG::evaluatePartialResult(std::__1::unordered_map<DB::ActionsDAG::Node const*, DB::ColumnWithTypeAndName, std::__1::hash<DB::ActionsDAG::Node const*>, std::__1::equal_to<DB::ActionsDAG::Node const*>, std::__1::allocator<std::__1::pair<DB::ActionsDAG::Node const* const, DB::ColumnWithTypeAndName>>>&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*>> const&, unsigned long, bool) build_docker/./src/Interpreters/ActionsDAG.cpp:785:48
#15 0x55a92ccb1109 in DB::ActionsDAG::updateHeader(DB::Block) const build_docker/./src/Interpreters/ActionsDAG.cpp:695:26
#16 0x55a9344cf962 in DB::ExpressionTransform::transformHeader(DB::Block, DB::ActionsDAG const&) build_docker/./src/Processors/Transforms/ExpressionTransform.cpp:8:23
#17 0x55a934a79def in DB::ExpressionStep::ExpressionStep(DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&) build_docker/./src/Processors/QueryPlan/ExpressionStep.cpp:31:9
#18 0x55a92ee6b819 in std::__1::__unique_if<DB::ExpressionStep>::__unique_single std::__1::make_unique[abi:v15000]<DB::ExpressionStep, DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&>(DB::DataStream const&, std::__1::shared_ptr<DB::ActionsDAG> const&) build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:714:32
#19 0x55a92ee6b819 in DB::InterpreterSelectQuery::executeExpression(DB::QueryPlan&, std::__1::shared_ptr<DB::ActionsDAG> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:2782:28
#20 0x55a92ee492b5 in DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::__1::optional<DB::Pipe>) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:1781:21
#21 0x55a92ee424a8 in DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:927:5
#22 0x55a92f0c1292 in DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:304:38
#23 0x55a92f0c3c0b in DB::InterpreterSelectWithUnionQuery::execute() build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:378:5
#24 0x55a92fce5f44 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) build_docker/./src/Interpreters/executeQuery.cpp:1102:40
#25 0x55a92fcd87e2 in DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) build_docker/./src/Interpreters/executeQuery.cpp:1281:26
SUMMARY: MemorySanitizer: use-of-uninitialized-value build_docker/./src/IO/WriteBufferFromFileDescriptorDiscardOnFailure.cpp:16:23 in DB::WriteBufferFromFileDescriptorDiscardOnFailure::nextImpl()
``` | https://github.com/ClickHouse/ClickHouse/issues/60017 | https://github.com/ClickHouse/ClickHouse/pull/60738 | 558cde5e80b3cd4d3b9033e0980841758ff7159e | bc0cf836685689843f36b1647e8d76e83cceb441 | "2024-02-15T09:56:03Z" | c++ | "2024-03-04T15:57:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 59,891 | ["tests/queries/0_stateless/03002_analyzer_prewhere.reference", "tests/queries/0_stateless/03002_analyzer_prewhere.sql"] | Analyzer: Invalid move to PREWHERE optimization. Cannot find column __table1.j in output | **Describe the bug**
https://s3.amazonaws.com/clickhouse-test-reports/59121/e522e23ce86b92c0b9765687cda8153a1f8ccd42/ast_fuzzer__asan_.html
**How to reproduce**
```sql
SET max_threads = 16, receive_timeout = 10., receive_data_timeout_ms = 10000, allow_suspicious_low_cardinality_types = true, enable_positional_arguments = false, log_queries = true, table_function_remote_max_addresses = 200, any_join_distinct_right_table_keys = true, joined_subquery_requires_alias = false, allow_experimental_analyzer = true, max_rows_to_read = 3, max_execution_time = 10., max_memory_usage = 10000000000, log_comment = '/workspace/ch/tests/queries/0_stateless/01710_projection_in_index.sql', send_logs_level = 'fatal', enable_optimize_predicate_expression = false, prefer_localhost_replica = true, allow_introspection_functions = true, optimize_functions_to_subcolumns = false, transform_null_in = true, optimize_use_projections = true, allow_deprecated_syntax_for_merge_tree = true, parallelize_output_from_storages = false;
CREATE TABLE t__fuzz_0 (`i` Int32, `j` Nullable(Int32), `k` Int32, PROJECTION p (SELECT * ORDER BY j)) ENGINE = MergeTree ORDER BY i SETTINGS index_granularity = 1;
INSERT INTO t__fuzz_0 SELECT * FROM generateRandom() LIMIT 3;
INSERT INTO t__fuzz_0 SELECT * FROM generateRandom() LIMIT 3;
INSERT INTO t__fuzz_0 SELECT * FROM generateRandom() LIMIT 3;
INSERT INTO t__fuzz_0 SELECT * FROM generateRandom() LIMIT 3;
INSERT INTO t__fuzz_0 SELECT * FROM generateRandom() LIMIT 3;
SELECT * FROM t__fuzz_0 PREWHERE (i < 5) AND (j IN (1, 2)) WHERE i < 5;
``` | https://github.com/ClickHouse/ClickHouse/issues/59891 | https://github.com/ClickHouse/ClickHouse/pull/60657 | 7bfdc3d6db3d521fe388e8d276e8951ffb7c6b25 | 9c4b15c1c104c94ddfd5fae453e45af86b5e4587 | "2024-02-12T13:56:30Z" | c++ | "2024-03-03T17:25:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 59,749 | ["docs/en/sql-reference/functions/string-search-functions.md", "src/Functions/countMatches.h", "tests/queries/0_stateless/01595_countMatches.reference", "tests/queries/0_stateless/01595_countMatches.sql", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | Logical error in `countMatches` found by AST Fuzzer | https://s3.amazonaws.com/clickhouse-test-reports/59563/a740fc7835ff54bda95c322d9d19e31349144d8e/ast_fuzzer__tsan_.html
```sql
SELECT (39.08935546874999, 57.25528054528888), countMatches(toFixedString(materialize('foo'), 3), toFixedString(toFixedString('[f]{0}', 6), 6)) GROUP BY jumpConsistentHash(42, 57)
```
```
[...]
2024.02.07 18:10:03.407721 [ 1213 ] {} <Fatal> BaseDaemon: 12. DB::FunctionCountMatches<(anonymous namespace)::FunctionCountMatchesCaseSensitive>::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x00000000092fd4d6 in /workspace/clickhouse
2024.02.07 18:10:07.835696 [ 1213 ] {} <Fatal> BaseDaemon: 13. DB::IFunction::executeImplDryRun(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x0000000007453304 in /workspace/clickhouse
2024.02.07 18:10:12.281175 [ 1213 ] {} <Fatal> BaseDaemon: 14. DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x0000000007452bb1 in /workspace/clickhouse
2024.02.07 18:10:12.324003 [ 1213 ] {} <Fatal> BaseDaemon: 15. ./build_docker/./src/Functions/IFunction.cpp:0: DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000016671fe7 in /workspace/clickhouse
[...]
``` | https://github.com/ClickHouse/ClickHouse/issues/59749 | https://github.com/ClickHouse/ClickHouse/pull/59752 | 0b0b273157d4ee322e36475d644be3c95c0c0bac | ddaef8d34280cb809c6fda0625b02266b84a23c4 | "2024-02-08T10:18:10Z" | c++ | "2024-02-15T09:02:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 59,729 | ["docs/en/sql-reference/aggregate-functions/reference/simplelinearregression.md"] | Parameter names description in simpleLinearRegression may be incorrect | **Describe the issue**
Hi 👋
In the documentation for the simpleLinearRegression function, the parameter names are a little confusing to me. From the documentation:
`simpleLinearRegression(x, y)`
```
Parameters:
x — Column with dependent variable values.
y — Column with explanatory variable values.
```
This suggests that `x` is explained by `y`, since `x` is described as the "dependent" variable and `y` as the "explanatory" variable. However, the documentation goes on to say:
```
Returned values:
Constants (a, b) of the resulting line y = a*x + b.
```
It feels a bit contradictory because in the above equation, `y` is being explained by `x`.
Should the documentation instead read as follows?
```
Parameters:
x — Column with explanatory variable values.
y — Column with dependent variable values.
```
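For what it's worth, a quick empirical check supports the proposed correction (this example is mine, not taken from the docs): if `x` is the explanatory variable and `y` the dependent one, fitting points that lie exactly on `y = 2*x + 1` should return the constants `(2, 1)`.
```sql
-- Sanity check: the points below satisfy y = 2*x + 1 exactly.
-- If the first argument is the explanatory variable, this returns (2, 1).
SELECT simpleLinearRegression(x, y)
FROM
(
    SELECT number AS x, 2 * number + 1 AS y
    FROM numbers(10)
);
```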
| https://github.com/ClickHouse/ClickHouse/issues/59729 | https://github.com/ClickHouse/ClickHouse/pull/60235 | 33cd7c25b10d9137bbc8cf9316cf7c4182708e5b | 5d6bc6f566f09d1cd2ecf414bbb74222ffffa6bd | "2024-02-07T17:08:51Z" | c++ | "2024-02-21T16:47:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 59,727 | ["src/Storages/MergeTree/MergeTreeIndexMinMax.cpp", "tests/queries/0_stateless/02985_minmax_index_aggregate_function.reference", "tests/queries/0_stateless/02985_minmax_index_aggregate_function.sql"] | Minmax index on aggregate function breaks merges for table | ClickHouse server version: 23.10.5.20
**Describe what's wrong**
A minmax index on an aggregate function column causes merges to break.
The affected table column:
```sql
`avg_time` AggregateFunction(avgWeighted, Int64, UInt64) CODEC(ZSTD(1)),
```
The index on the table:
```
INDEX idx_avg_time avg_time TYPE minmax GRANULARITY 1,
```
The error seen in `system.part_log` for the failing merges:
```
Code: 43. DB::Exception: Operator < is not implemented for AggregateFunctionStateData. (ILLEGAL_TYPE_OF_ARGUMENT), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000cdce617 in /usr/bin/clickhouse
1. DB::Exception::Exception<char const (&) [62]>(int, char const (&) [62]) @ 0x0000000007b746e0 in /usr/bin/clickhouse
2. auto DB::Field::dispatch<auto auto DB::applyVisitor<DB::FieldVisitorAccurateLess, DB::FieldRef&, DB::FieldRef&>(DB::FieldVisitorAccurateLess&&, DB::FieldRef&, DB::FieldRef&)::'lambda'(DB::FieldVisitorAccurateLess&)::operator()<DB::AggregateFunctionStateData>(DB::FieldVisitorAccurateLess&) const::'lambda'(DB::FieldVisitorAccurateLess&), DB::FieldRef&>(DB::FieldVisitorAccurateLess&&, DB::FieldRef&) @ 0x00000000134d7e04 in /usr/bin/clickhouse
3. auto DB::Field::dispatch<auto DB::applyVisitor<DB::FieldVisitorAccurateLess, DB::FieldRef&, DB::FieldRef&>(DB::FieldVisitorAccurateLess&&, DB::FieldRef&, DB::FieldRef&)::'lambda'(DB::FieldVisitorAccurateLess&), DB::FieldRef&>(DB::FieldVisitorAccurateLess&&, DB::FieldRef&) @ 0x00000000134d20bd in /usr/bin/clickhouse
4. DB::MergeTreeIndexAggregatorMinMax::update(DB::Block const&, unsigned long*, unsigned long) @ 0x0000000013664915 in /usr/bin/clickhouse
5. DB::MergeTreeDataPartWriterOnDisk::calculateAndSerializeSkipIndices(DB::Block const&, std::vector<DB::Granule, std::allocator<DB::Granule>> const&) @ 0x0000000013611be9 in /usr/bin/clickhouse
6. DB::MergeTreeDataPartWriterWide::write(DB::Block const&, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 63ul, 64ul> const*) @ 0x0000000013615367 in /usr/bin/clickhouse
7. DB::MergedBlockOutputStream::write(DB::Block const&) @ 0x000000001375db64 in /usr/bin/clickhouse
8. DB::MergeTask::ExecuteAndFinalizeHorizontalPart::executeImpl() @ 0x00000000135219f0 in /usr/bin/clickhouse
9. DB::MergeTask::ExecuteAndFinalizeHorizontalPart::execute() @ 0x000000001352154b in /usr/bin/clickhouse
10. DB::MergeTask::execute() @ 0x00000000135268f9 in /usr/bin/clickhouse
11. DB::ReplicatedMergeMutateTaskBase::executeStep() @ 0x00000000137c61ce in /usr/bin/clickhouse
12. DB::MergeTreeBackgroundExecutor<DB::DynamicRuntimeQueue>::threadFunction() @ 0x000000001353855b in /usr/bin/clickhouse
13. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000ceb7abf in /usr/bin/clickhouse
14. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000cebb5dc in /usr/bin/clickhouse
15. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000ceb9e07 in /usr/bin/clickhouse
16. ? @ 0x00007f2a86e41609 in ?
17. ? @ 0x00007f2a86d66133 in ?
```
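Until this is fixed, a plausible stopgap (my sketch, not part of the original report; `affected_table` is a placeholder name) is to drop the offending index, since the failure happens while rebuilding it during merges:
```sql
-- Mitigation sketch: stop building a minmax index over the aggregate state.
ALTER TABLE affected_table DROP INDEX idx_avg_time;
```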
| https://github.com/ClickHouse/ClickHouse/issues/59727 | https://github.com/ClickHouse/ClickHouse/pull/59733 | 69e118e58734aa822f86d33c3596310509af3c42 | b4449535c28527488b47689dcead205523ea9f01 | "2024-02-07T15:24:07Z" | c++ | "2024-02-09T18:39:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 59,625 | ["src/Interpreters/HashJoin.cpp"] | Is it wrong in `joinRightColumnsSwitchNullability`? | In `joinRightColumnsSwitchNullability`:
```C++
template <JoinKind KIND, JoinStrictness STRICTNESS, typename KeyGetter, typename Map>
size_t joinRightColumnsSwitchNullability(
std::vector<KeyGetter> && key_getter_vector,
const std::vector<const Map *> & mapv,
AddedColumns & added_columns,
JoinStuff::JoinUsedFlags & used_flags)
{
if (added_columns.need_filter)
{
return joinRightColumnsSwitchMultipleDisjuncts<KIND, STRICTNESS, KeyGetter, Map, true>(std::forward<std::vector<KeyGetter>>(key_getter_vector), mapv, added_columns, used_flags);
}
else
{
return joinRightColumnsSwitchMultipleDisjuncts<KIND, STRICTNESS, KeyGetter, Map, true>(std::forward<std::vector<KeyGetter>>(key_getter_vector), mapv, added_columns, used_flags);
}
}
```
The two branches are identical; is something wrong here? | https://github.com/ClickHouse/ClickHouse/issues/59625 | https://github.com/ClickHouse/ClickHouse/pull/60259 | e1e92e5547333ea5e319e3ad5a8efd640b55f743 | f07f438fe68f6205762b0ff93f315eaa413d33c1 | "2024-02-06T06:36:40Z" | c++ | "2024-03-04T09:32:26Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 59,437 | ["tests/queries/0_stateless/02996_analyzer_prewhere_projection.reference", "tests/queries/0_stateless/02996_analyzer_prewhere_projection.sql"] | Fuzzer assertion `Logical error: 'Invalid move to PREWHERE optimization. Cannot find column k in output'.` | **Describe the bug**
A link to the report https://s3.amazonaws.com/clickhouse-test-reports/59337/74aa3d638ef54c40c5fa617abe22413eda323c8e/ast_fuzzer__asan_/report.html
**How to reproduce**
Found in https://github.com/ClickHouse/ClickHouse/pull/59337 | https://github.com/ClickHouse/ClickHouse/issues/59437 | https://github.com/ClickHouse/ClickHouse/pull/60191 | 946c2855c44e832aa25a634e16bf7c668036e007 | 9c5487c7d276c75135ef8297ab183a0b6ef1249b | "2024-01-31T15:36:58Z" | c++ | "2024-02-21T09:45:36Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 59,418 | ["src/Processors/QueryPlan/ReadFromSystemNumbersStep.cpp", "tests/integration/test_storage_numbers/test.py"] | numbers(xx) reads more rows than it did before | **Describe the situation**
Just noting this here so it is not forgotten.
https://github.com/ClickHouse/ClickHouse/pull/50909#issuecomment-1869094989
**How to reproduce**
https://fiddle.clickhouse.com/87861a7b-30c4-45c2-862e-07cdfb63588d
23.11
```
"rows_before_limit_at_least": 65409,
"statistics":
{
"elapsed": 0.002581122,
"rows_read": 65409,
"bytes_read": 523272
}
```
https://fiddle.clickhouse.com/29e9744f-0726-4f2c-97c7-4a8362eac102
23.10
```
"rows_before_limit_at_least": 3,
"statistics":
{
"elapsed": 0.002170152,
"rows_read": 3,
"bytes_read": 24
}
```
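The fiddle contents are not inlined above; a query of roughly this shape (hypothetical on my part, the exact fiddle query may differ) is the kind that exposes the difference in `rows_read`:
```sql
-- Hypothetical shape of the repro: a small LIMIT over the numbers table.
-- 23.10 reports rows_read close to the LIMIT; 23.11 reads far more rows.
SELECT number
FROM numbers(100000000)
LIMIT 3
FORMAT JSON;
```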
| https://github.com/ClickHouse/ClickHouse/issues/59418 | https://github.com/ClickHouse/ClickHouse/pull/60546 | 5e597228d752fce13c554214359586eb2990524a | 9e7894d8cbf4ea08657326083cf677699ed0be12 | "2024-01-31T10:55:16Z" | c++ | "2024-03-07T08:38:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 59,342 | ["src/Storages/System/StorageSystemTables.cpp", "tests/queries/0_stateless/02841_not_ready_set_bug.sh"] | Not-ready Set is passed as the second argument for function 'in' | ```
SELECT *
FROM information_schema.TABLES
WHERE exists((
SELECT 1
))
```
```
Received exception from server (version 23.12.2):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Not-ready Set is passed as the second argument for function 'in': while executing 'FUNCTION in(1 :: 0, _subquery2 :: 1) -> in(1, _subquery2) UInt8 : 4'. (LOGICAL_ERROR)
```
https://fiddle.clickhouse.com/0a709c49-4530-40f3-8b31-7afb706b723b | https://github.com/ClickHouse/ClickHouse/issues/59342 | https://github.com/ClickHouse/ClickHouse/pull/59351 | 2a1cbc2db4796c074bcb4fd82ab79d82f47de5b7 | de341a6d6ec26a580b6a4910861f6099b1de9baa | "2024-01-29T15:43:40Z" | c++ | "2024-01-30T02:59:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 59,165 | ["docs/en/operations/settings/settings.md", "src/Core/Settings.h", "src/Processors/Transforms/buildPushingToViewsChain.cpp", "tests/queries/0_stateless/02972_insert_deduplication_token_hierarchical_inserts.reference", "tests/queries/0_stateless/02972_insert_deduplication_token_hierarchical_inserts.sql"] | MV deduplication not working as expected when passing `insert_deduplication_token` | We are trying to emulate transactions (or to be able to have idempotent inserts) by using deduplication so we can safely retry inserts on failure. We can generate a static block of rows and assign a `$insertion_id` to each one. Having that set up and passing the following settings, we can retry inserts that have partially written data in any table in the data flow.
- insert_deduplicate = 1
- deduplicate_blocks_in_dependent_materialized_views = 1
- insert_deduplication_token = $insertion_id (automatically generated for each block of rows we ingest)
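For illustration (a sketch of the scheme, not taken verbatim from the script linked below; the column values are placeholders), a retriable insert looks roughly like this, and re-running the exact same statement with the same token should be a no-op all the way downstream:
```sql
-- 'batch-42' stands for the $insertion_id we generate per block of rows.
INSERT INTO landing
SETTINGS
    insert_deduplicate = 1,
    deduplicate_blocks_in_dependent_materialized_views = 1,
    insert_deduplication_token = 'batch-42'
VALUES (1, 'a');
```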
This has been working fine but we have encountered a corner case. Note that the following setup is a bit special since a landing table gets split into various MVs and reunited again into another table.
This is the schematics of the setup:
```
landing -┬--> mv_1_1 ---> ds_1_1 ---> mv_2_1 --┬-> ds_2_1 ---> mv_2_2 ---> ds_3_1
         |                                     |
         └--> mv_1_2 ---> ds_1_2 ---> mv_2_2 --┘
```
And here is a script to reproduce the setup and the issue we are seeing: https://pastila.nl/?003e3b86/1785bacbca04dd3ef0c91f4e115b1e6f#e9/2/RQDxlV+QARpP3tw8g==
If you run the script you can see the following output from `system.part_log`:
```sql
┌─query_id──────┬──────────event_time─┬─database─┬─table───┬─name──────┬─error─┐
│ first_insert  │ 2024-01-24 12:10:06 │ dedup    │ landing │ all_0_0_0 │     0 │
│ first_insert  │ 2024-01-24 12:10:06 │ dedup    │ landing │ all_1_1_0 │     0 │
│ first_insert  │ 2024-01-24 12:10:06 │ dedup    │ ds_1_2  │ all_0_0_0 │     0 │
│ first_insert  │ 2024-01-24 12:10:06 │ dedup    │ ds_1_1  │ all_0_0_0 │     0 │
│ first_insert  │ 2024-01-24 12:10:06 │ dedup    │ ds_2_1  │ all_0_0_0 │     0 │
│ first_insert  │ 2024-01-24 12:10:06 │ dedup    │ ds_2_1  │ all_2_2_0 │   389 │ # <= Shouldn't be deduplicated
│ first_insert  │ 2024-01-24 12:10:06 │ dedup    │ ds_3_1  │ all_0_0_0 │     0 │
│ first_insert  │ 2024-01-24 12:10:06 │ dedup    │ ds_3_1  │ all_2_2_0 │   389 │ # <= Shouldn't be deduplicated
│ second_insert │ 2024-01-24 12:10:06 │ dedup    │ landing │ all_3_3_0 │   389 │
│ second_insert │ 2024-01-24 12:10:06 │ dedup    │ landing │ all_4_4_0 │   389 │
│ second_insert │ 2024-01-24 12:10:06 │ dedup    │ ds_1_2  │ all_2_2_0 │   389 │
│ second_insert │ 2024-01-24 12:10:06 │ dedup    │ ds_1_1  │ all_2_2_0 │   389 │
│ second_insert │ 2024-01-24 12:10:06 │ dedup    │ ds_2_1  │ all_3_3_0 │   389 │
│ second_insert │ 2024-01-24 12:10:06 │ dedup    │ ds_2_1  │ all_4_4_0 │   389 │
│ second_insert │ 2024-01-24 12:10:06 │ dedup    │ ds_3_1  │ all_3_3_0 │   389 │
│ second_insert │ 2024-01-24 12:10:06 │ dedup    │ ds_3_1  │ all_4_4_0 │   389 │
└───────────────┴─────────────────────┴──────────┴─────────┴───────────┴───────┘
```
1. first insert: We set a custom deduplication token and all inserts until `ds_2_1` are properly ingested (no deduplication). The issue starts at `ds_2_1` since it's receiving the "same block" in two different inserts (through `mv_2_1` and `mv_2_2`) , and the second one gets deduplicated. The deduplication then propagates to `ds_3_1` and any other DS downstream.
2. second insert: Everything gets correctly deduplicated, as expected.
I'd expect ClickHouse to handle this despite technically being the same ingested block. **Could this be considered a bug, or is it expected behavior?** How is the internal deduplication being executed? Is it not considering that they are two different parts coming from different MVs?
A possible solution could be to stop sending our custom token and rely on ClickHouse generated hashes using the data, but we see two possible drawbacks:
* If a MV is not deterministic (e.g. it's using `now()`) it will always generate different output, and retries are not going to get deduplicated
* If a MV generates the same output for two different inserts (e.g. MV doing only a `count()` and two inserts ingesting 1k different rows), the second one is going to be deduplicated when it shouldn't
We are aware of initiatives like microtransactions (https://github.com/ClickHouse/ClickHouse/issues/57815) but we wonder if there is something we can do meanwhile to fix this situation. | https://github.com/ClickHouse/ClickHouse/issues/59165 | https://github.com/ClickHouse/ClickHouse/pull/59238 | bd83830ceafe2637279f61a482077189b83ba25c | a51aa7b6683ec58d93aab797c1a71d5ebb2460fd | "2024-01-24T14:49:54Z" | c++ | "2024-01-29T11:52:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 59,126 | ["src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/02972_to_string_nullable_timezone.reference", "tests/queries/0_stateless/02972_to_string_nullable_timezone.sql"] | crash in toString(ts, Nullable('UTC')) with nullable timezone |
**How to reproduce**
```sql
SELECT toString(now(), materialize(CAST('UTC', 'Nullable(String)')))
Query id: 496dc875-ed87-4f84-808e-03f3413a2774
[1] 8652 abort clickhouse-local -mn
```
| https://github.com/ClickHouse/ClickHouse/issues/59126 | https://github.com/ClickHouse/ClickHouse/pull/59190 | 10d4f1fd87520b7fedd9c78d5523853d42b83bf0 | 0cbadc9b52f5e728a397b3081c6887c0f4213856 | "2024-01-23T15:22:12Z" | c++ | "2024-01-26T12:33:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 59,039 | ["src/Storages/MergeTree/MutateTask.cpp", "tests/queries/0_stateless/02346_inverted_index_bug59039.reference", "tests/queries/0_stateless/02346_inverted_index_bug59039.sql"] | Can't remove index files from datapart using the drop index statement. | **Describe what's wrong**
Index files can't be removed from the data part using the `DROP INDEX` statement.
Theoretically, the `ALTER ... DROP INDEX` statement should trigger a mutation which, in the new data part, removes the related skip-index files. However, when I delete `inverted` indexes from the table, I found that their files are not deleted in the data part (the inverted index file extension is `.gin_xx`).
**Does it reproduce on recent release?**
Yes, the release dated `2024-01-05` reproduces this bug.
image: `clickhouse/clickhouse-server:latest`
Digest: `sha256:2ff5796c67e8d588273a5f3f84184b9cdaa39a324bcf74abd3652d818d755f8c`
**How to reproduce**
* Which ClickHouse server version to use
I'm using the latest `clickhouse-server`
```bash
❯ docker pull clickhouse/clickhouse-server:latest
latest: Pulling from clickhouse/clickhouse-server
Digest: sha256:2ff5796c67e8d588273a5f3f84184b9cdaa39a324bcf74abd3652d818d755f8c
Status: Image is up to date for clickhouse/clickhouse-server:latest
docker.io/clickhouse/clickhouse-server:latest
```
Here is my `docker-compose.yaml`, you can run `docker compose up` to start it.
```yaml
version: '3.7'
services:
clickhouse:
image: clickhouse/clickhouse-server:latest
ports:
- '8123:8123'
- '9000:9000'
- '8998:8998'
- '9363:9363'
- '9116:9116'
ulimits:
memlock: -1
volumes:
- ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/data:/var/lib/clickhouse
- ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/log:/var/log/clickhouse-server
- ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/config:/etc/clickhouse-server
```
* Queries to run that lead to unexpected result
_Step1. create table for test_
```sql
DROP TABLE IF EXISTS simple_table_curd SYNC;
CREATE TABLE simple_table_curd
(
`id` UInt64,
`doc` String,
INDEX text_idx doc TYPE inverted GRANULARITY 1
)
ENGINE = MergeTree
ORDER BY id
SETTINGS index_granularity = 2, index_granularity_bytes = '10Mi', min_bytes_for_wide_part = 0, min_rows_for_wide_part = 0;
```
_Step 2. Insert some sample data_
```sql
INSERT INTO simple_table_curd VALUES (0, 'Ancient empires rise and fall, shaping history''s course.'),(1, 'Artistic expressions reflect diverse cultural heritages.'),(2, 'Social movements transform societies, forging new paths.'),(3, 'Economies fluctuate, reflecting the complex interplay of global forces.'),(4, 'Strategic military campaigns alter the balance of power.'),(5, 'Quantum leaps redefine understanding of physical laws.'),(6, 'Chemical reactions unlock mysteries of nature.'), (7, 'Philosophical debates ponder the essence of existence.'),(8, 'Marriages blend traditions, celebrating love''s union.'),(9, 'Explorers discover uncharted territories, expanding world maps.'),(10, 'Innovations in technology drive societal progress.'),(11, 'Environmental conservation efforts protect Earth''s biodiversity.'),(12, 'Diplomatic negotiations seek to resolve international conflicts.'),(13, 'Ancient philosophies provide wisdom for modern dilemmas.'),(14, 'Economic theories debate the merits of market systems.'),(15, 'Military strategies evolve with technological advancements.'),(16, 'Physics theories delve into the universe''s mysteries.'),(17, 'Chemical compounds play crucial roles in medical breakthroughs.'),(18, 'Philosophers debate ethics in the age of artificial intelligence.'),(19, 'Wedding ceremonies across cultures symbolize lifelong commitment.');
```
_Step 3. Note the inverted index files stored in the data part_
<img width="1497" alt="image" src="https://github.com/ClickHouse/ClickHouse/assets/42711279/1be4f9f9-33a9-4054-8569-2808e665c79a">
The data part stores 6 index files: 2 of them are skip-index files and 4 are gin-index files.
_Step 4. Drop the inverted index_
```sql
ALTER TABLE simple_table_curd DROP INDEX text_idx;
```
<img width="1494" alt="image" src="https://github.com/ClickHouse/ClickHouse/assets/42711279/7ba28714-5de0-479b-ac69-b371a38b044a">
As you can see, the two skip-index files were deleted, but the gin-index files were not removed.
**Expected behavior**
I hope that the gin-index files can also be deleted by the `ALTER ... DROP INDEX` statement.
<img width="1499" alt="image" src="https://github.com/ClickHouse/ClickHouse/assets/42711279/e35782a5-77d4-4568-ba9b-dbb3ac94865b">
| https://github.com/ClickHouse/ClickHouse/issues/59039 | https://github.com/ClickHouse/ClickHouse/pull/59040 | 8ddda0caf0cb13e8c6ad066146ec4fe8d147368f | 2cfcaf8ff4d8051ef0eb58cd12f7991f5c8930c7 | "2024-01-22T03:51:16Z" | c++ | "2024-01-29T19:39:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,997 | ["tests/queries/0_stateless/02969_functions_to_subcolumns_if_null.reference", "tests/queries/0_stateless/02969_functions_to_subcolumns_if_null.sql"] | If(Int64 IS NULL, ...) wrong data type assumed with `allow_experimental_analyzer = 1, optimize_functions_to_subcolumns = 1` | **Does it reproduce on recent release?**
Yes, 24.1
https://fiddle.clickhouse.com/ace1ecdf-83eb-4350-95a2-489c43a113b1
```
SELECT
sum(multiIf(id IS NULL, 1, 0)) -- same for case/if
FROM test
SETTINGS allow_experimental_analyzer = 1, optimize_functions_to_subcolumns = 1;
Received exception from server (version 23.12.2):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Illegal type Int64 of first argument (condition) of function if. Must be UInt8.. (ILLEGAL_TYPE_OF_ARGUMENT)
(query: SELECT
sum(multiIf(id IS NULL, 1, 0))
FROM test
SETTINGS allow_experimental_analyzer = 1, optimize_functions_to_subcolumns = 1;)
SELECT sum(If(id IS NULL, 1, 0))
FROM test,
(
SELECT number
FROM numbers(100)
) AS e
SETTINGS allow_experimental_analyzer = 1, optimize_functions_to_subcolumns = 1
Elapsed: 0.004 sec.
Received exception:
Code: 43. DB::Exception: Illegal type Int64 of last argument for aggregate function with If suffix. (ILLEGAL_TYPE_OF_ARGUMENT)
SELECT sum(case when id IS NULL then 1 else 0 end)
FROM test,
(
SELECT number
FROM numbers(100)
) AS e
SETTINGS allow_experimental_analyzer = 1, optimize_functions_to_subcolumns = 1;
Elapsed: 0.001 sec.
Received exception:
Code: 43. DB::Exception: Illegal type Int64 of first argument (condition) of function if. Must be UInt8. (ILLEGAL_TYPE_OF_ARGUMENT)
```
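As a sanity check (my addition, not part of the original report), the same aggregation runs once the subcolumn rewrite is switched off, which also doubles as a per-query workaround:
```sql
-- Workaround sketch: keep the analyzer but disable the offending rewrite.
SELECT sum(multiIf(id IS NULL, 1, 0))
FROM test
SETTINGS allow_experimental_analyzer = 1, optimize_functions_to_subcolumns = 0;
```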
**Additional context**
```
Received exception from server (version 24.1.1):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Illegal type Int64 of first argument (condition) of function if. Must be UInt8.. (ILLEGAL_TYPE_OF_ARGUMENT)
(query: select i_item_desc
,w_warehouse_name
,d1.d_week_seq d_week_seq
,sum(case when p_promo_sk is null then 1 else 0 end) no_promo
,sum(case when p_promo_sk is not null then 1 else 0 end) promo
,count(*) total_cnt
from catalog_sales
join inventory on (cs_item_sk = inv_item_sk)
join warehouse on (w_warehouse_sk=inv_warehouse_sk)
join item on (i_item_sk = cs_item_sk)
join customer_demographics on (cs_bill_cdemo_sk = cd_demo_sk)
join household_demographics on (cs_bill_hdemo_sk = hd_demo_sk)
join date_dim d1 on (cs_sold_date_sk = d1.d_date_sk)
join date_dim d2 on (inv_date_sk = d2.d_date_sk)
join date_dim d3 on (cs_ship_date_sk = d3.d_date_sk)
left outer join promotion on (cs_promo_sk=p_promo_sk)
left outer join catalog_returns on (cr_item_sk = cs_item_sk and cr_order_number = cs_order_number)
where d1.d_week_seq = d2.d_week_seq
and inv_quantity_on_hand < cs_quantity
and d3.d_date > d1.d_date + 5
and hd_buy_potential = '1001-5000'
and d1.d_year = 2001
and cd_marital_status = 'M'
group by i_item_desc,w_warehouse_name,d1.d_week_seq
order by total_cnt desc, i_item_desc, w_warehouse_name, d_week_seq
LIMIT 100;)
```
| https://github.com/ClickHouse/ClickHouse/issues/58997 | https://github.com/ClickHouse/ClickHouse/pull/58999 | 0951cf8d647e2c4bbb915056cf15dc47ff29e1d0 | b1fdc8a2b78a843b6b756c9c3e9b1405dfc3ce16 | "2024-01-19T14:08:10Z" | c++ | "2024-01-20T08:22:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,885 | ["docs/changelogs/v23.9.1.1854-stable.md"] | Skipping index of Bloom filter for IPv6 type stops working after upgrade from 22.6 to 23.12 | On ClickHouse 22.6:
```
CREATE TABLE test
(
`ip` IPv6,
INDEX index_ip_bloom_filter ip TYPE bloom_filter GRANULARITY 5
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS index_granularity = 8192
insert into test select concat('::ffff:', IPv4NumToString(toUInt32(rand(number)%20+180000000))) from system.numbers limit 3000000
SELECT count()
FROM test
WHERE toIPv6('::ffff:10.186.149.4') = ip
SETTINGS use_skip_indexes = 0
Query id: 4a0220f7-b453-4ac0-9b7e-25bb3860aba3
┌─count()─┐
│  149610 │
└─────────┘
SELECT count()
FROM test
WHERE toIPv6('::ffff:10.186.149.4') = ip
SETTINGS use_skip_indexes = 1
Query id: f5d21504-d600-4260-bba5-76f5595e07e6
┌─count()─┐
│  149610 │
└─────────┘
```
Upgrade to 23.12 and query the old data generated in 22.6:
```
SELECT count()
FROM test
WHERE toIPv6('::ffff:10.186.149.4') = ip
SETTINGS use_skip_indexes = 0
Query id: 4a0220f7-b453-4ac0-9b7e-25bb3860aba3
┌─count()─┐
│  149610 │
└─────────┘
SELECT count()
FROM test
WHERE toIPv6('::ffff:10.186.149.4') = ip
SETTINGS use_skip_indexes = 1
Query id: f5d21504-d600-4260-bba5-76f5595e07e6
┌─count()─┐
│       0 │
└─────────┘
```
I thought this bug was fixed by https://github.com/ClickHouse/ClickHouse/pull/57707 but it appears not.
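Two stopgaps one could try in the meantime (my sketch, not from the report): keep `use_skip_indexes = 0` for the affected queries, which the transcripts above show returns correct counts, and rebuild the index over the old parts, which may bring it back in sync:
```sql
-- Sketch: rebuild the bloom filter index for parts written before the upgrade.
ALTER TABLE test MATERIALIZE INDEX index_ip_bloom_filter;
```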
| https://github.com/ClickHouse/ClickHouse/issues/58885 | https://github.com/ClickHouse/ClickHouse/pull/59127 | 6e1465af10cd217dab31619f9e63398a2685223a | aa5e4b418beef41a3cfacadf3a0130c0fd634ef7 | "2024-01-16T23:09:43Z" | c++ | "2024-01-23T16:00:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,848 | ["src/Common/AsyncLoader.cpp"] | I've noticed this message in server logs, but it is stupid | ```
Processed: inf%
``` | https://github.com/ClickHouse/ClickHouse/issues/58848 | https://github.com/ClickHouse/ClickHouse/pull/58849 | ed1221ef4c1f1786203fd763ac81c3cad4f6fbd1 | ae1e19f42b912b01149003681575faaa3b178186 | "2024-01-16T08:43:29Z" | c++ | "2024-01-17T10:58:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,822 | ["src/DataTypes/DataTypesNumber.cpp", "tests/queries/0_stateless/02969_mysql_cast_type_aliases.reference", "tests/queries/0_stateless/02969_mysql_cast_type_aliases.sql"] | MySQL compatibility: support CAST AS SIGNED | **Use case**
Certain queries generated by Looker Studio do the following:
```sql
SELECT CAST(DATE_FORMAT(CAST(DATE_FORMAT(time, '%Y-%m-%d %H:%i:%s') AS DATETIME), '%w') AS SIGNED) AS qt_duv6kvlmdd,
COUNT(1) AS qt_jxd1bulmdd,
author
FROM commits
GROUP BY qt_duv6kvlmdd, author;
```
This example uses the [commits](https://clickhouse.com/docs/en/getting-started/example-datasets/github) dataset.
As of 23.12, the result is an error:
```
Code: 50. DB::Exception: Received from localhost:9000.
DB::Exception: Unknown data type family: SIGNED:
While processing CAST(DATE_FORMAT(CAST(DATE_FORMAT(time, '%Y-%m-%d %H:%i:%s'), 'DATETIME'), '%w'), 'SIGNED') AS qt_duv6kvlmdd.
(UNKNOWN_TYPE)
```
**Describe the solution you'd like**
No errors; `CAST x AS SIGNED` produces an Int64, `CAST x AS UNSIGNED` produces a UInt64.
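For illustration (the desired behavior, not what 23.12 currently does):
```sql
-- Desired: SIGNED/UNSIGNED act as aliases of Int64/UInt64.
SELECT CAST(-1 AS SIGNED);    -- same as toInt64(-1)  -> -1
SELECT CAST(-1 AS UNSIGNED);  -- same as toUInt64(-1) -> 18446744073709551615
```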
See https://dev.mysql.com/doc/refman/8.0/en/cast-functions.html#function_cast for more details. | https://github.com/ClickHouse/ClickHouse/issues/58822 | https://github.com/ClickHouse/ClickHouse/pull/58954 | 09e24ed6c5bc1d36ffdb4ba7fefc9d22900b0d98 | 6f4c0925ab399fc248fec1b89ed6393bcf6978f0 | "2024-01-15T13:58:22Z" | c++ | "2024-01-23T06:01:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,719 | ["src/Interpreters/InterpreterCreateQuery.cpp", "tests/queries/0_stateless/02973_dictionary_table_exception_fix.reference", "tests/queries/0_stateless/02973_dictionary_table_exception_fix.sql"] | A better exception message for the conflict when creating a dictionary and a table with the same name | **Describe the issue**
I expect to see a clearer exception message when creating a dictionary with the same name as a table we already have. The old exception is `Code: 387. DB::Exception: Received from localhost:9000. DB::Exception: Dictionary default.n already exists.(DICTIONARY_ALREADY_EXISTS)`, which is not informative for the user because we don't have this dictionary.
**How to reproduce**
```SQL
CREATE TABLE x (field Int64) engine=MergeTree order by field;
CREATE DICTIONARY x (y String, value UInt64 DEFAULT 0) PRIMARY KEY y;
```
**Expected behavior**
Clearer exception messages with relevant checks, e.g. `A table with the same name already exists`.
| https://github.com/ClickHouse/ClickHouse/issues/58719 | https://github.com/ClickHouse/ClickHouse/pull/58841 | 69a1935a3e27a21c071d9821f01950bd1f887c6c | 5ba82eed81931f637183c741319ac221938eb535 | "2024-01-11T14:55:44Z" | c++ | "2024-01-29T21:41:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,714 | ["src/Interpreters/InterpreterSelectQuery.cpp", "src/Planner/Planner.cpp", "tests/queries/0_stateless/02962_parallel_window_functions_different_partitioning.reference", "tests/queries/0_stateless/02962_parallel_window_functions_different_partitioning.sql"] | Wrong result in window function starting from 23.11 | **Describe what's wrong**
I have the following table:
```sql
CREATE TABLE empsalary
(
`depname` LowCardinality(String),
`empno` UInt64,
`salary` Int32,
`enroll_date` Date
)
ENGINE = MergeTree
ORDER BY enroll_date;
insert into empsalary values ('develop',11,5200,'2007-08-15'), ('sales',3,4800,'2007-08-01'), ('sales',1,5000,'2006-10-01'), ('sales',4,4800,'2007-08-08'), ('personnel',2,3900,'2006-12-23'), ('develop',10,5200,'2007-08-01'), ('personnel',5,3500,'2007-12-10'), ('develop',7,4200,'2008-01-01'), ('develop',8,6000,'2006-10-01'), ('develop',9,4500,'2008-01-01');
```
And I perform the following query:
```sql
SELECT * FROM
(SELECT depname,
sum(salary) OVER (PARTITION BY depname order by empno) AS depsalary,
min(salary) OVER (PARTITION BY depname, empno order by enroll_date) AS depminsalary
FROM empsalary)
WHERE depname = 'sales'
ORDER BY depname, depsalary
```
I get the following expected result:
```
sales 5000 5000
sales 9800 4800
sales 14600 4800
```
If I execute the same query on a distributed table, I get a wrong result:
```
│ sales │ 4800 │ 4800 │
│ sales │ 5000 │ 5000 │
│ sales │ 9800 │ 4800 │
```
https://github.com/alsugiliazova/clickhouse-setup - **configuration file**
How to reproduce:
```sql
CREATE TABLE empsalary_source ON CLUSTER sharded_cluster
(
`depname` LowCardinality(String),
`empno` UInt64,
`salary` Int32,
`enroll_date` Date
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/empsalary_source', '{replica}')
ORDER BY enroll_date;
CREATE TABLE empsalary AS empsalary_source
ENGINE = Distributed(sharded_cluster, default, empsalary_source, empno % toUInt8(getMacro('shard')));
insert into empsalary values ('develop',11,5200,'2007-08-15'), ('sales',3,4800,'2007-08-01'), ('sales',1,5000,'2006-10-01'), ('sales',4,4800,'2007-08-08'), ('personnel',2,3900,'2006-12-23'), ('develop',10,5200,'2007-08-01'), ('personnel',5,3500,'2007-12-10'), ('develop',7,4200,'2008-01-01'), ('develop',8,6000,'2006-10-01'), ('develop',9,4500,'2008-01-01');
SELECT *
FROM
(
SELECT
depname,
sum(salary) OVER (PARTITION BY depname ORDER BY empno ASC) AS depsalary,
min(salary) OVER (PARTITION BY depname, empno ORDER BY enroll_date ASC) AS depminsalary
FROM empsalary
)
WHERE depname = 'sales'
ORDER BY
depname ASC,
depsalary ASC
```
| https://github.com/ClickHouse/ClickHouse/issues/58714 | https://github.com/ClickHouse/ClickHouse/pull/58739 | e2ddaa99e242ac154169c71d268d503bbc9c46cb | 3b1d72868398a5771aa32ec13e25437fe83062d8 | "2024-01-11T12:14:04Z" | c++ | "2024-01-12T15:36:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,593 | ["src/Storages/MergeTree/MergeTreeIndexSet.cpp", "tests/queries/0_stateless/02962_analyzer_constant_set.reference", "tests/queries/0_stateless/02962_analyzer_constant_set.sql"] | [Analyzer] Set index: Cannot get value from Set (parallel_index test) | From perf tests (parallel_index):
```sql
create table test_parallel_index (x UInt64, y UInt64, z UInt64, INDEX a (y) TYPE minmax GRANULARITY 2,
INDEX b (z) TYPE set(8) GRANULARITY 2) engine = MergeTree order by x partition by bitAnd(x, 63 * 64) settings index_granularity = 4;
insert into test_parallel_index select number, number, number from numbers(1048576);
select sum(z) from test_parallel_index where z = 2 or z = 7 or z = 13 or z = 17 or z = 19 or z = 23;
```
With the analyzer you get: `Code: 48. DB::Exception: Cannot get value from Set. (NOT_IMPLEMENTED)`
Works fine without it.
| https://github.com/ClickHouse/ClickHouse/issues/58593 | https://github.com/ClickHouse/ClickHouse/pull/58657 | d10a6e91af82c470b675482bb4955e55ef9097ca | 211c285a0bfc7c7da4e286144766c667ca4fcdd3 | "2024-01-08T09:42:25Z" | c++ | "2024-01-13T18:23:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,590 | ["src/Functions/makeDate.cpp", "tests/queries/0_stateless/02245_make_datetime64.reference", "tests/queries/0_stateless/02245_make_datetime64.sql"] | makeDateTime64 only accepts constant fraction | **Describe what's wrong**
The function `makeDateTime64` only accepts constant fractions-of-a-second
https://fiddle.clickhouse.com/8656706e-975f-49bd-b653-17cbb4e4360b
```
SELECT makeDateTime64(1970, 1, 1, 12, 0, 0, rand(), 9, 'UTC');
Received exception from server (version 23.12.2):
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Illegal type of argument #7 'fraction' of function makeDateTime64, expected const Number, got UInt32: While processing makeDateTime64(1970, 1, 1, 12, 0, 0, rand(), 9, 'UTC'). (ILLEGAL_COLUMN)
(query: SELECT makeDateTime64(1970, 1, 1, 12, 0, 0, rand(), 9, 'UTC'))
```
**Does it reproduce on recent release?**
Reproduces on >= 23.9
On 23.8, evaluates to the max value
https://fiddle.clickhouse.com/323b5b56-5956-4eef-b718-47d920b4b82e
```
SELECT makeDateTime64(1970, 1, 1, 12, 0, 0, rand(), 9, 'UTC');
1970-01-01 12:00:00.999999999
```
**Expected behavior**
This function should accept non-constant data (e.g. an identifier for a column).
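Meanwhile, a workaround sketch (assuming the `addNanoseconds` family accepts a non-constant count, which is my assumption rather than something stated in this report) is to build the value with a constant fraction and add the variable part afterwards:
```sql
-- Sketch: add the non-constant fraction as nanoseconds after construction.
SELECT addNanoseconds(makeDateTime64(1970, 1, 1, 12, 0, 0, 0, 9, 'UTC'), rand());
```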
| https://github.com/ClickHouse/ClickHouse/issues/58590 | https://github.com/ClickHouse/ClickHouse/pull/58597 | 21e4e0537d6d0dcb65291caef25a740d54c060cd | 22a0e085b7e2518ffbf0ab5a1171fcfa7d3d463d | "2024-01-08T01:47:49Z" | c++ | "2024-01-09T11:17:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,451 | ["src/Processors/Merges/Algorithms/Graphite.h", "tests/config/config.d/graphite_alternative.xml", "tests/queries/0_stateless/02910_replicated_merge_parameters_must_consistent.sql"] | Segfault during startup after upgrading to 23.12.1.1368 | Hello,
we are using CH solely as a backend for graphite-clickhouse (https://github.com/go-graphite/graphite-clickhouse), in a 14 node cluster (7 shards with 2 replicas), running on Debian 10. After upgrading from version 23.11.2.11 to version 23.12.1.1368, we get a segfault during startup of clickhouse-server. Upgrading to version 23.11.3.23 works fine.
**How to reproduce**
Upgrade clickhouse-server to the latest version 23.12.1.1368 and restart the service.
* ClickHouse server version to use: 23.12.1.1368
* Stack trace:
```
2024.01.03 10:55:16.257352 [ 5207 ] {} <Fatal> BaseDaemon: ########## Short fault info ############
2024.01.03 10:55:16.257421 [ 5207 ] {} <Fatal> BaseDaemon: (version 23.12.1.1368 (official build), build id: 19BC8CB87C441A86F879D18F5C18F17BA08526EE, git hash: a2faa65b080a587026c86844f3a20c74d23a86f8) (from thread 4678) Received signal 11
2024.01.03 10:55:16.257471 [ 5207 ] {} <Fatal> BaseDaemon: Signal description: Segmentation fault
2024.01.03 10:55:16.257498 [ 5207 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2024.01.03 10:55:16.257520 [ 5207 ] {} <Fatal> BaseDaemon: Stack trace: 0x0000000012429abc 0x00000000124230bb 0x0000000011c1132a 0x000000001237eae9 0x000000001034e13f 0x000000001034f191 0x000000000c7be158 0x00007f6841afffa3 0x00007f6841a3106f
2024.01.03 10:55:16.257577 [ 5207 ] {} <Fatal> BaseDaemon: ########################################
2024.01.03 10:55:16.257609 [ 5207 ] {} <Fatal> BaseDaemon: (version 23.12.1.1368 (official build), build id: 19BC8CB87C441A86F879D18F5C18F17BA08526EE, git hash: a2faa65b080a587026c86844f3a20c74d23a86f8) (from thread 4678) (no query) Received signal Segmentation fault (11)
2024.01.03 10:55:16.257632 [ 5207 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2024.01.03 10:55:16.257647 [ 5207 ] {} <Fatal> BaseDaemon: Stack trace: 0x0000000012429abc 0x00000000124230bb 0x0000000011c1132a 0x000000001237eae9 0x000000001034e13f 0x000000001034f191 0x000000000c7be158 0x00007f6841afffa3 0x00007f6841a3106f
2024.01.03 10:55:16.257763 [ 5207 ] {} <Fatal> BaseDaemon: 2. DB::Graphite::Pattern::updateHash(SipHash&) const @ 0x0000000012429abc in /usr/bin/clickhouse
2024.01.03 10:55:16.257815 [ 5207 ] {} <Fatal> BaseDaemon: 3. DB::ReplicatedMergeTreeTableMetadata::ReplicatedMergeTreeTableMetadata(DB::MergeTreeData const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&) @ 0x00000000124230bb in /usr/bin/clickhouse
2024.01.03 10:55:16.259803 [ 5207 ] {} <Fatal> BaseDaemon: 4. DB::StorageReplicatedMergeTree::checkTableStructure(String const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool) @ 0x0000000011c1132a in /usr/bin/clickhouse
2024.01.03 10:55:16.259827 [ 5207 ] {} <Fatal> BaseDaemon: 5. DB::ReplicatedMergeTreeAttachThread::run() @ 0x000000001237eae9 in /usr/bin/clickhouse
2024.01.03 10:55:16.259852 [ 5207 ] {} <Fatal> BaseDaemon: 6. DB::BackgroundSchedulePool::threadFunction() @ 0x000000001034e13f in /usr/bin/clickhouse
2024.01.03 10:55:16.259893 [ 5207 ] {} <Fatal> BaseDaemon: 7. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, char const*)::$_0>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, char const*)::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000001034f191 in /usr/bin/clickhouse
2024.01.03 10:55:16.259931 [ 5207 ] {} <Fatal> BaseDaemon: 8. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7be158 in /usr/bin/clickhouse
2024.01.03 10:55:16.260665 [ 5207 ] {} <Fatal> BaseDaemon: 9. start_thread @ 0x0000000000007fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
2024.01.03 10:55:16.260687 [ 5207 ] {} <Fatal> BaseDaemon: 10. ? @ 0x00000000000f906f in /lib/x86_64-linux-gnu/libc-2.28.so
2024.01.03 10:55:16.597142 [ 5207 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: E62A564F8077C0AF79E8E5726EE71488)
2024.01.03 10:55:16.600962 [ 5207 ] {} <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
2024.01.03 10:55:20.708171 [ 4444 ] {} <Fatal> Application: Child process was terminated by signal 11.
```
I have also tried searching through the logs for thread 4678, but there is nothing in the logs. Please let me know if you require any further information.
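For completeness, one way to search the server-side log table for that thread, assuming `system.text_log` is enabled (a sketch, not verified here):
```sql
SELECT event_time, level, message
FROM system.text_log
WHERE thread_id = 4678
ORDER BY event_time;
```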
| https://github.com/ClickHouse/ClickHouse/issues/58451 | https://github.com/ClickHouse/ClickHouse/pull/58453 | 971d030ec5adc5773c8fd69d5986ae0a97efe005 | 2593a566eb2d74640b1c9b569b6bb95c19c38325 | "2024-01-03T11:43:02Z" | c++ | "2024-01-04T13:44:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,405 | ["src/Storages/Kafka/StorageKafka.cpp", "src/Storages/Kafka/StorageKafka.h", "tests/integration/test_storage_kafka/test.py"] | Materialized View Regression for Kafka input tables (23.12.1 version) | **Describe what's wrong**
We are using a Kafka engine table to ingest data from Kafka into ClickHouse. We then connect a materialized view to it and store the results in a target table. Since we want a first-write-wins strategy, we need to check the target contents first to deduplicate incoming records. The fastest way based on our benchmarks was to use the IN operator (JOINs were slow). To be able to do that we need to read records from the source table (the insert batch) twice inside the materialized view. AFAIK this should be supported.
This stopped working in 23.12 with this error:
```
com.clickhouse.client.ClickHouseException: Code: 620. DB::Exception: Direct select is not allowed. To enable use setting
`stream_like_engine_allow_direct_select`: While processing id IN ((SELECT id FROM kafka_input GROUP BY id) AS _subquery72):
While processing id NOT IN ((SELECT id FROM deduplicate WHERE id IN (SELECT id FROM kafka_input GROUP BY id)) AS
_subquery71). (QUERY_NOT_ALLOWED) (version 23.12.1.1368 (official build))
```
The view looks like this. Notice that we query FROM kafka_input twice (the source table of the materialized view):
```
CREATE MATERIALIZED VIEW IF NOT EXISTS deduplicate_mv TO deduplicate
AS SELECT
id,time,any(value) AS value
FROM kafka_input
WHERE id NOT IN (
SELECT id FROM deduplicate WHERE id IN (
SELECT id FROM kafka_input GROUP BY id)
)
GROUP BY id,time;
```
A similar example using the Null engine works as expected. Here is a fiddle:
https://fiddle.clickhouse.com/a0207085-22c8-44f4-9c7f-acf751058644
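For reference, a minimal sketch of that Null-engine analogue (column types are illustrative; the exact script is in the fiddle):
```sql
CREATE TABLE null_input (id Int64, time DateTime, value Float64) ENGINE = Null;

CREATE MATERIALIZED VIEW deduplicate_mv_null TO deduplicate
AS SELECT
    id, time, any(value) AS value
FROM null_input
WHERE id NOT IN (
    SELECT id FROM deduplicate WHERE id IN (
        SELECT id FROM null_input GROUP BY id)
)
GROUP BY id, time;
```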
**Does it reproduce on recent release?**
Yes. It is a regression in the 23.12 release.
**How to reproduce**
* Use ClickHouse 23.12
* Use Kafka engine input table
* Use Materialized view that uses Kafka input table twice as in our example
**Expected behavior**
It should be possible to create such a materialized view, since the query does not read from the Kafka table itself, but from the batch of inserts produced by the Kafka engine table.
| https://github.com/ClickHouse/ClickHouse/issues/58405 | https://github.com/ClickHouse/ClickHouse/pull/58477 | 9cfdff2ddb280b5db537d210e76e4a606878e460 | 3e1d7bf6853e029d7bba0a03742e5a95999362c3 | "2024-01-02T10:21:19Z" | c++ | "2024-01-15T11:31:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,355 | ["src/Functions/format.cpp", "tests/queries/0_stateless/02956_format_constexpr.reference", "tests/queries/0_stateless/02956_format_constexpr.sql"] | `format` is not a constant expression | **Use case**
```
SELECT isConstant(format('{}, world', 'Hello'))
```
This should return 1. | https://github.com/ClickHouse/ClickHouse/issues/58355 | https://github.com/ClickHouse/ClickHouse/pull/58358 | 1d344026be3efd9e1d407b688ce1062ed709708f | ebd95586d223c28b87e257b71909e0970e47abd3 | "2023-12-29T17:50:47Z" | c++ | "2023-12-30T11:34:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,283 | ["docs/en/sql-reference/aggregate-functions/reference/sparkbar.md", "src/AggregateFunctions/AggregateFunctionSparkbar.cpp", "tests/queries/0_stateless/02955_sparkBar_alias_sparkbar.reference", "tests/queries/0_stateless/02955_sparkBar_alias_sparkbar.sql"] | `sparkBar` as an alias to `sparkbar` | **Use case**
For some reason, I often write `sparkBar` by mistake. Maybe other people also do. | https://github.com/ClickHouse/ClickHouse/issues/58283 | https://github.com/ClickHouse/ClickHouse/pull/58335 | ef5837a00830338a0a27ea1eec78616125366c02 | 09181b6c3759b8f24a9a60a97c3204a557235b0f | "2023-12-28T09:50:09Z" | c++ | "2023-12-29T12:02:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,241 | ["src/Functions/transform.cpp", "tests/queries/0_stateless/02958_transform_enum.reference", "tests/queries/0_stateless/02958_transform_enum.sql"] | `transform` does not work for `Enum` | ```
milovidov-desktop :) WITH arrayJoin(['Hello', 'world'])::Enum('Hello', 'world') AS x SELECT transform(x, ['Hello', 'world'], [123, 456], 0)
Received exception:
Code: 43. DB::Exception: First argument and elements of array of the second argument of function transform must have compatible types: While processing transform(CAST(arrayJoin(['Hello', 'world']), 'Enum(\'Hello\', \'world\')') AS x, ['Hello', 'world'], [123, 456], 0). (ILLEGAL_TYPE_OF_ARGUMENT)
```
```
milovidov-desktop :) WITH arrayJoin(['Hello', 'world'])::Enum('Hello', 'world') AS x SELECT CASE x WHEN 'Hello' THEN 123 WHEN 'world' THEN 456 END
Received exception:
Code: 43. DB::Exception: First argument and elements of array of the second argument of function transform must have compatible types. (ILLEGAL_TYPE_OF_ARGUMENT)
```
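A possible workaround until `Enum` is supported directly: cast the value to `String` first (a sketch, not verified on the affected version):
```sql
WITH arrayJoin(['Hello', 'world'])::Enum('Hello', 'world') AS x
SELECT transform(x::String, ['Hello', 'world'], [123, 456], 0)
```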
| https://github.com/ClickHouse/ClickHouse/issues/58241 | https://github.com/ClickHouse/ClickHouse/pull/58360 | c0290d1cfde7a7df26d2eba6ab003bac5db9220b | 1d344026be3efd9e1d407b688ce1062ed709708f | "2023-12-27T11:32:04Z" | c++ | "2023-12-30T11:32:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,180 | ["src/Formats/registerFormats.cpp", "src/Processors/Formats/Impl/JSONCompactEachRowRowInputFormat.cpp", "tests/queries/0_stateless/02951_data.jsonl.zst", "tests/queries/0_stateless/02951_parallel_parsing_json_compact_each_row.reference", "tests/queries/0_stateless/02951_parallel_parsing_json_compact_each_row.sh"] | Parallel parsing does not work for `JSONCompactEachRow` | **Describe the unexpected behaviour**
```
SELECT sum(ignore(*)) FROM file('194510.data.jsonl', JSONCompactEachRow, '
time_offset Decimal64(3),
lat Float64,
lon Float64,
altitude String,
ground_speed Float32,
track_degrees Float32,
flags UInt32,
vertical_rate Int32,
aircraft Tuple(
alert Int64,
alt_geom Int64,
gva Int64,
nac_p Int64,
nac_v Int64,
nic Int64,
nic_baro Int64,
rc Int64,
sda Int64,
sil Int64,
sil_type String,
spi Int64,
track Float64,
type String,
version Int64,
category String,
emergency String,
flight String,
squawk String,
baro_rate Int64,
nav_altitude_fms Int64,
nav_altitude_mcp Int64,
nav_modes Array(String),
nav_qnh Float64,
geom_rate Int64,
ias Int64,
mach Float64,
mag_heading Float64,
oat Int64,
roll Float64,
tas Int64,
tat Int64,
true_heading Float64,
wd Int64,
ws Int64,
track_rate Float64,
nav_heading Float64
),
source LowCardinality(String),
geometric_altitude Int32,
geometric_vertical_rate Int32,
indicated_airspeed Int32,
roll_angle Float32,
hex String
')
```
[194510.data.jsonl.zst.txt](https://github.com/ClickHouse/ClickHouse/files/13758041/194510.data.jsonl.zst.txt)
Produces an error:
```
Received exception:
Code: 27. DB::ParsingException: Cannot parse input: expected '[' before: ',\n[59506.0,39.347031,-94.713433,1375,133.6,193.4,0,-896,null,"adsb_icao",1400,-576,null,null,"f899e0b7"],\n[59507.0,39.346370,-94.713623,1375,134.6,193.3,0,-704,':
Row 1:
ERROR: There is no '[' before the row.
: While executing ParallelParsingBlockInputFormat: While executing File: (in file/uri /tmp/194510.data.jsonl): (at row 54240)
. (CANNOT_PARSE_INPUT_ASSERTION_FAILED)
```
Works if I do `SET input_format_parallel_parsing = 0` | https://github.com/ClickHouse/ClickHouse/issues/58180 | https://github.com/ClickHouse/ClickHouse/pull/58181 | d4c7462eef38747209959b276c40cf6700d78d92 | ff6419361a6c68cab35f30d1e6aaf4042122166e | "2023-12-23T05:04:32Z" | c++ | "2023-12-23T09:16:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,172 | ["src/Storages/MergeTree/RPNBuilder.cpp", "tests/queries/0_stateless/02962_indexHint_rpn_construction.reference", "tests/queries/0_stateless/02962_indexHint_rpn_construction.sql"] | Segfault in `MergeTreeIndexConditionBloomFilter::alwaysUnknownOrTrue()` | ERROR: type should be string, got "https://s3.amazonaws.com/clickhouse-test-reports/58051/bbd7bc0dd9c31b2af69ca42ea19b78e830e39bc3/fuzzer_astfuzzerasan/report.html\r\n\r\n```sql\r\nCREATE TABLE tab\r\n(\r\n `foo` Array(LowCardinality(String)),\r\n INDEX idx foo TYPE bloom_filter GRANULARITY 1\r\n)\r\nENGINE = MergeTree\r\nPRIMARY KEY tuple();\r\n\r\nINSERT INTO tab SELECT if(number % 2, ['value'], [])\r\nFROM system.numbers\r\nLIMIT 10000;\r\n\r\nset allow_experimental_analyzer = 1;\r\n\r\nSELECT *\r\nFROM tab\r\nPREWHERE indexHint(indexHint(-1, 0.))\r\nWHERE has(foo, 'b');\r\n```\r\n\r\nException on client:\r\n```\r\nCode: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF)\r\n\r\n1477758:2023.12.22 20:18:13.397693 [ 159142 ] {} <Fatal> BaseDaemon: ########## Short fault info ############\r\n1477759:2023.12.22 20:18:13.398066 [ 159142 ] {} <Fatal> BaseDaemon: (version 23.12.1.1, build id: 0EDF26E5AAE6D47134D8EF2DB475537CA0F237A7, git hash: b8d274d070b89bdfee578492f8210cd96859fdd8) (from thread 28612) Received signal 11\r\n1477760:2023.12.22 20:18:13.407030 [ 159142 ] {} <Fatal> BaseDaemon: Signal description: Segmentation fault\r\n1477761:2023.12.22 20:18:13.407289 [ 159142 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.\r\n1477762:2023.12.22 20:18:13.407505 [ 159142 ] {} <Fatal> BaseDaemon: Stack trace: 0x000000001236e0eb 0x000000001f0d115f 0x000000002018772c 0x0000000020185f1b 0x0000000020237d2a 0x000000002015c671 0x000000002015c113 0x000000001d88b0e6 0x000000001d88add1 0x000000001dedb1d6 0x000000001ded60de 0x000000001f7ba2e0 0x000000001f7cf872 0x00000000245f0299 0x00000000245f0ae8 0x00000000247f66a1 0x00000000247f2e9a 0x00000000247f18f5 0x00007f26214e29eb 0x00007f26215667cc\r\n1477763:2023.12.22 20:18:13.407825 [ 159142 ] {} <Fatal> BaseDaemon: ########################################\r\n1477764:2023.12.22 20:18:13.412282 [ 159142 ] {} <Fatal> BaseDaemon: (version 23.12.1.1, build id: 0EDF26E5AAE6D47134D8EF2DB475537CA0F237A7, git hash: b8d274d070b89bdfee578492f8210cd96859fdd8) (from thread 28612) (query_id: 6f0874e5-9c4d-498a-9fa7-fa0ccf9b6e5e) (query: SELECT * FROM tab PREWHERE indexHint(indexHint(-1, 0.)) WHERE has(foo, 'b')) Received signal Segmentation fault (11)\r\n1477765:2023.12.22 20:18:13.412819 [ 159142 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.\r\n1477766:2023.12.22 20:18:13.413178 [ 159142 ] {} <Fatal> BaseDaemon: Stack trace: 0x000000001236e0eb 0x000000001f0d115f 0x000000002018772c 0x0000000020185f1b 0x0000000020237d2a 0x000000002015c671 0x000000002015c113 0x000000001d88b0e6 0x000000001d88add1 0x000000001dedb1d6 0x000000001ded60de 0x000000001f7ba2e0 0x000000001f7cf872 0x00000000245f0299 0x00000000245f0ae8 0x00000000247f66a1 0x00000000247f2e9a 0x00000000247f18f5 0x00007f26214e29eb 0x00007f26215667cc\r\n\r\n```\r\n\r\ncc: @zhang2014, @novikd " | https://github.com/ClickHouse/ClickHouse/issues/58172 | https://github.com/ClickHouse/ClickHouse/pull/58875 | 2f68086db81d6d79b65e135b2f2bc0ed4ce3c098 | a5e0be484a8589a33e715da3b1633f279f37dffe | "2023-12-22T19:20:08Z" | c++ | "2024-01-17T11:24:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,149 | ["src/Common/ICachePolicyUserQuota.h", "src/Common/TTLCachePolicy.h", "tests/queries/0_stateless/02494_query_cache_user_quotas_after_drop.reference", "tests/queries/0_stateless/02494_query_cache_user_quotas_after_drop.sql"] | is query_cache_max_entries working? | version 23.11.3.23
This one is a bit odd. I swear I saw it working ONCE, but never again after that. Restarting clickhouse-server doesn't help. I hope this is reproducible this way:
```sql
system drop query cache;
set query_cache_max_entries = 0; -- default setting
select 1 settings use_query_cache=1;
select 2 settings use_query_cache=1;
set query_cache_max_entries = 3;
select 3 settings use_query_cache=1;
select 4 settings use_query_cache=1;
select * from system.query_cache;
```
The cache should now show 3 entries. (I believe this was also the case for me).
Now repeat the same procedure (again dropping the query cache first), and the cache will remain empty. After this, only `set query_cache_max_entries = 0;` will store new entries in the cache.
Is anyone seeing this behavior as well?
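A quick way to make the problem visible after the second run (3 entries are expected, 0 are observed):
```sql
SELECT count() FROM system.query_cache;
```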
| https://github.com/ClickHouse/ClickHouse/issues/58149 | https://github.com/ClickHouse/ClickHouse/pull/58731 | 1d34f4a3047323f43675b3a11ddf60b2b5ac3ee8 | 9146391e4810990146dfaf864262546587f394cf | "2023-12-22T13:53:50Z" | c++ | "2024-01-16T16:14:45Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,096 | ["docs/en/sql-reference/functions/tuple-functions.md", "src/Functions/array/arrayDistance.cpp", "src/Functions/array/arrayDotProduct.cpp", "src/Functions/array/arrayScalarProduct.h", "tests/performance/dotProduct.xml", "tests/performance/norm_distance.xml", "tests/queries/0_stateless/02708_dotProduct.reference", "tests/queries/0_stateless/02708_dotProduct.sql", "tests/queries/0_stateless/02708_dot_product.reference", "tests/queries/0_stateless/02708_dot_product.sql"] | `dotProduct` is not optimized for vector Similarity | ## Background
Vector similarity is very popular right now, and ClickHouse has a few tutorials about this [Source](https://clickhouse.com/blog/approximate-nearest-neighbour-ann-with-sql-powered-local-sensitive-hashing-lsh-random-projections). Regarding distances between vectors, they mention the most common ones:
- Euclidean distance
- Cosine distance
$$ \text{dotProduct}(a, b) = \sum_i a_i b_i \qquad \text{cosineDistance}(a, b) = 1 - \frac{a \cdot b}{\lVert a \rVert \, \lVert b \rVert} $$
But in my opinion dotProduct is a better replacement for cosineDistance because **is the same but faster**, let me explain...
## cosineDistance == dotProduct of L2 normalized vectors
`dotProduct` has a much cheaper computation than `cosineDistance`. But they produce different things.

However, **the dotProduct of the L2-normalized vectors** is mathematically equivalent to cosineDistance (they differ only by the constant `1 -` offset), but computationally cheaper because only the norm of the query vector has to be computed at search time. The following formula illustrates this:

> L2 normalization is dividing (element-wise) the vector by its L2Norm. This produces the unit vector (vector lenght = 1) pointing to the same direction of the original vector.
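A hedged numeric check of the equivalence in plain SQL (the vectors are arbitrary examples):
```sql
WITH [1., 2., 3.] AS a, [4., 5., 6.] AS b
SELECT
    cosineDistance(a, b) AS cos_dist,
    1 - dotProduct(
        arrayMap(x -> x / L2Norm(a), a),
        arrayMap(x -> x / L2Norm(b), b)) AS one_minus_dot_of_normalized;
-- both columns should agree up to floating-point error
```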
## Poor performance of `dotProduct` in clickhouse now
Both `cosineDistance` and `dotProduct` exist in ClickHouse. `dotProduct` should be faster, but is currently ~16x slower.
```sql
WITH cosineDistance(img_emb, <query_emb>) AS score
SELECT id, score, caption
FROM laion
ORDER BY score DESC
LIMIT 3
# 3 rows in set. Elapsed: 0.535 sec. Processed 1.00 million rows, 2.13 GB (1.87 million rows/s., 3.98 GB/s.) Peak memory usage: 164.25 MiB.
WITH dotProduct(img_emb, <query_emb>) AS score
SELECT id, score, caption
FROM laion
ORDER BY score DESC
LIMIT 3
# 3 rows in set. Elapsed: 8.385 sec. Processed 1.00 million rows, 2.13 GB (119.31 thousand rows/s., 254.04 MB/s.) Peak memory usage: 420.67 MiB.
```
## Possible solution
I know projects like [SimSIMD](https://github.com/ashvardanian/SimSIMD) could help with this optimization.
| https://github.com/ClickHouse/ClickHouse/issues/58096 | https://github.com/ClickHouse/ClickHouse/pull/60202 | 659d960990a1d7dfba0e8004e2f4dde8fa79c22d | 2d2acd6bf999a25b2cd1f3e018935d2f40eb5bbb | "2023-12-20T20:47:02Z" | c++ | "2024-02-22T10:19:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 58,080 | ["src/Analyzer/Utils.cpp", "src/Analyzer/Utils.h", "src/Interpreters/InterpreterSelectQueryAnalyzer.cpp"] | Query from `02479_race_condition_between_insert_and_droppin_mv` may get stuck | https://s3.amazonaws.com/clickhouse-test-reports/57906/e4a831be17f85e345c5c9cd8f00bd22b8e759692/stateless_tests__release__analyzer_.html
```
2023-12-15 21:11:56 Found hung queries in processlist:
2023-12-15 21:11:56 Row 1:
2023-12-15 21:11:56 ──────
2023-12-15 21:11:56 is_initial_query: 1
2023-12-15 21:11:56 user: default
2023-12-15 21:11:56 query_id: 5c8c78d8-70f4-474b-a02c-090cdd83099d
2023-12-15 21:11:56 address: ::1
2023-12-15 21:11:56 port: 40178
2023-12-15 21:11:56 initial_user: default
2023-12-15 21:11:56 initial_query_id: 5c8c78d8-70f4-474b-a02c-090cdd83099d
2023-12-15 21:11:56 initial_address: ::1
2023-12-15 21:11:56 initial_port: 40178
2023-12-15 21:11:56 interface: 1
2023-12-15 21:11:56 os_user:
2023-12-15 21:11:56 client_hostname: e52d4065eab8
2023-12-15 21:11:56 client_name: ClickHouse client
2023-12-15 21:11:56 client_revision: 54466
2023-12-15 21:11:56 client_version_major: 23
2023-12-15 21:11:56 client_version_minor: 12
2023-12-15 21:11:56 client_version_patch: 1
2023-12-15 21:11:56 http_method: 0
2023-12-15 21:11:56 http_user_agent:
2023-12-15 21:11:56 http_referer:
2023-12-15 21:11:56 forwarded_for:
2023-12-15 21:11:56 quota_key:
2023-12-15 21:11:56 distributed_depth: 0
2023-12-15 21:11:56 elapsed: 3455.436973
2023-12-15 21:11:56 is_cancelled: 0
2023-12-15 21:11:56 is_all_data_sent: 0
2023-12-15 21:11:56 read_rows: 250886
2023-12-15 21:11:56 read_bytes: 2007088
2023-12-15 21:11:56 total_rows_approx: 249886
2023-12-15 21:11:56 written_rows: 501
2023-12-15 21:11:56 written_bytes: 17008
2023-12-15 21:11:56 memory_usage: 7339080
2023-12-15 21:11:56 peak_memory_usage: 9433904
2023-12-15 21:11:56 query: INSERT INTO test_race_condition_landing SELECT number, toString(number), toString(number) from system.numbers limit 1386, 500
2023-12-15 21:11:56 query_kind: Insert
2023-12-15 21:11:56 thread_ids: [9696,1946,31713,32004,32007,31813,31720,9798,31709,2027,31587,31834,31651,32121,31589,9861,31783,9830,1973,31685,1518,2711,9707,31629,9708,2724,1478,31742,9820,1943,31741,2244,31635,1953,2282,31870,1509,2261,2014,31987,2248,1903,2012,31972,2290,9785,31657,9832,1939,31690,31656,2700,31638,1964,31807,2427]
2023-12-15 21:11:56 ProfileEvents: {'Query':1,'InsertQuery':1,'QueriesWithSubqueries':9,'SelectQueriesWithSubqueries':7,'InsertQueriesWithSubqueries':2,'FileOpen':42,'ReadBufferFromFileDescriptorReadBytes':994353,'WriteBufferFromFileDescriptorWrite':9,'WriteBufferFromFileDescriptorWriteBytes':6785,'ReadCompressedBytes':926531,'CompressedReadBufferBlocks':61,'CompressedReadBufferBytes':1986016,'OpenedFileCacheMisses':33,'OpenedFileCacheMicroseconds':35,'IOBufferAllocs':87,'IOBufferAllocBytes':3763628,'ArenaAllocChunks':2,'ArenaAllocBytes':8192,'FunctionExecute':6,'MarkCacheHits':11,'MarkCacheMisses':11,'CreatedReadBufferOrdinary':33,'DiskReadElapsedMicroseconds':1870,'DiskWriteElapsedMicroseconds':170,'NetworkReceiveElapsedMicroseconds':1650,'NetworkSendElapsedMicroseconds':20510,'NetworkSendBytes':267439,'InsertedRows':501,'InsertedBytes':17008,'SelectedParts':22,'SelectedRanges':22,'SelectedMarks':50,'SelectedRows':250886,'SelectedBytes':2007088,'WaitMarksLoadMicroseconds':3037,'BackgroundLoadingMarksTasks':22,'LoadedMarksCount':108,'LoadedMarksMemoryBytes':1344,'MergeTreeDataWriterRows':500,'MergeTreeDataWriterUncompressedBytes':17000,'MergeTreeDataWriterCompressedBytes':6438,'MergeTreeDataWriterBlocks':1,'MergeTreeDataWriterBlocksAlreadySorted':1,'InsertedCompactParts':1,'ContextLock':98,'RWLockAcquiredReadLocks':21,'PartsLockHoldMicroseconds':156,'RealTimeMicroseconds':15054340,'UserTimeMicroseconds':14021392,'SystemTimeMicroseconds':367199,'SoftPageFaults':7053,'OSCPUWaitMicroseconds':23,'OSCPUVirtualTimeMicroseconds':4207,'OSReadChars':1040662,'CannotWriteToWriteBufferDiscard':3,'QueryProfilerRuns':10291,'ThreadPoolReaderPageCacheMiss':61,'ThreadPoolReaderPageCacheMissBytes':994353,'ThreadPoolReaderPageCacheMissElapsedMicroseconds':1870,'SynchronousReadWaitMicroseconds':6437,'LogTrace':39,'LogDebug':46,'LogInfo':1}
2023-12-15 21:11:56 Settings: {'connect_timeout_with_failover_ms':'2000','connect_timeout_with_failover_secure_ms':'3000','idle_connection_timeout':'36000','s3_check_objects_after_upload':'1','stream_like_engine_allow_direct_select':'1','replication_wait_for_inactive_replica_timeout':'30','allow_nonconst_timezone_arguments':'1','log_queries':'1','insert_quorum_timeout':'60000','fsync_metadata':'0','http_send_timeout':'60','http_receive_timeout':'60','opentelemetry_start_trace_probability':'0.1','allow_experimental_analyzer':'1','max_untracked_memory':'1048576','memory_profiler_step':'1048576','log_comment':'02479_race_condition_between_insert_and_droppin_mv.sh','send_logs_level':'error','allow_introspection_functions':'1','database_atomic_wait_for_drop_and_detach_synchronously':'1','distributed_ddl_entry_format_version':'6','async_insert_busy_timeout_ms':'5000','enable_filesystem_cache':'1','enable_filesystem_cache_on_write_operations':'1','filesystem_cache_segments_batch_size':'10','load_marks_asynchronously':'1','allow_prefetched_read_pool_for_remote_filesystem':'0','allow_prefetched_read_pool_for_local_filesystem':'0','filesystem_prefetch_max_memory_usage':'1073741824','insert_keeper_max_retries':'20','insert_keeper_fault_injection_probability':'0.01'}
2023-12-15 21:11:56 current_database: test_9pzvs9be
2023-12-15 21:11:56 stacktraces: Thread ID 2427
2023-12-15 21:11:56 ::
2023-12-15 21:11:56 ::
2023-12-15 21:11:56 ./base/poco/Foundation/src/Event_POSIX.cpp:145::Poco::EventImpl::waitImpl(long)
2023-12-15 21:11:56 ./build_docker/./src/Processors/Executors/CompletedPipelineExecutor.cpp:96::DB::CompletedPipelineExecutor::execute()
2023-12-15 21:11:56 ./build_docker/./src/Server/TCPHandler.cpp:548::DB::TCPHandler::runImpl()
2023-12-15 21:11:56 ./build_docker/./src/Server/TCPHandler.cpp:2294::DB::TCPHandler::run()
2023-12-15 21:11:56 ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57::Poco::Net::TCPServerConnection::start()
2023-12-15 21:11:56 ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48::Poco::Net::TCPServerDispatcher::run()
2023-12-15 21:11:56 ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202::Poco::PooledThread::run()
2023-12-15 21:11:56 ./base/poco/Foundation/include/Poco/SharedPtr.h:231::Poco::ThreadImpl::runnableEntry(void*)
2023-12-15 21:11:56 ::
2023-12-15 21:11:56 ::Thread ID 31656
2023-12-15 21:11:56 ./contrib/jemalloc/include/jemalloc/internal/sz.h:191::operator delete(void*, unsigned long)
2023-12-15 21:11:56 ./contrib/llvm-project/libcxx/include/new:0::std::__1::__hash_table<std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, unsigned long>, std::__1::__unordered_map_hasher<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, unsigned long>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, true>, std::__1::__unordered_map_equal<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, unsigned long>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, true>, std::__1::allocator<std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, unsigned long>>>::__move_assign(std::__1::__hash_table<std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, unsigned long>, std::__1::__unordered_map_hasher<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, unsigned long>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, true>, std::__1::__unordered_map_equal<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, unsigned long>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, true>, std::__1::allocator<std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, unsigned long>>>&, std::__1::integral_constant<bool, true>)
2023-12-15 21:11:56 ./src/Core/Block.h:25::DB::HashJoin::joinBlock(DB::Block&, std::__1::shared_ptr<DB::ExtraBlock>&)
2023-12-15 21:11:56 ./build_docker/./src/Processors/Transforms/JoiningTransform.cpp:0::DB::JoiningTransform::work()
2023-12-15 21:11:56 ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:0::DB::ExecutionThreadContext::executeTask()
2023-12-15 21:11:56 ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:273::DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*)
2023-12-15 21:11:56 ./contrib/llvm-project/libcxx/include/atomic:958::DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*)
2023-12-15 21:11:56 ./build_docker/./src/Processors/Executors/PullingPipelineExecutor.cpp:54::DB::PullingPipelineExecutor::pull(DB::Chunk&)
2023-12-15 21:11:56 ./build_docker/./src/Processors/Transforms/buildPushingToViewsChain.cpp:648::DB::ExecutingInnerQueryFromViewTransform::onGenerate()
2023-12-15 21:11:56 ./build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:164::void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::ExceptionKeepingTransform::work()::$_2, void ()>>(std::__1::__function::__policy_storage const*)
2023-12-15 21:11:56 ./contrib/llvm-project/libcxx/include/__functional/function.h:848::DB::runStep(std::__1::function<void ()>, DB::ThreadStatus*, std::__1::atomic<unsigned long>*)
2023-12-15 21:11:56 ./contrib/llvm-project/libcxx/include/__functional/function.h:818::DB::ExceptionKeepingTransform::work()
2023-12-15 21:11:56 ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:0::DB::ExecutionThreadContext::executeTask()
2023-12-15 21:11:56 ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:273::DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*)
2023-12-15 21:11:56 ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701::DB::PipelineExecutor::execute(unsigned long, bool)
2023-12-15 21:11:56 ./build_docker/./src/Processors/Executors/CompletedPipelineExecutor.cpp:0::void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::CompletedPipelineExecutor::execute()::$_0>(DB::CompletedPipelineExecutor::execute()::$_0&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*)
2023-12-15 21:11:56 ./base/base/../base/wide_integer_impl.h:809::void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*)
2023-12-15 21:11:56 ::
2023-12-15 21:11:56 ::
```
The query has been waiting on something for 3455 seconds.
~~cc: @AlfVII, @Avogar~~ | https://github.com/ClickHouse/ClickHouse/issues/58080 | https://github.com/ClickHouse/ClickHouse/pull/58958 | 7ae7086a2d67ab6db1e9f93b5001d456e9fa76cd | a2d7bcbd5f6908c308b665c53f88aab713c9a2df | "2023-12-20T16:13:21Z" | c++ | "2024-01-25T08:25:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,987 | ["src/Interpreters/RequiredSourceColumnsVisitor.cpp", "tests/queries/0_stateless/02946_literal_alias_misclassification.reference", "tests/queries/0_stateless/02946_literal_alias_misclassification.sql"] | When literal alias matches subquery alias, 'Not found column xx in block' error occurs. | ```
CREATE TABLE test
(
`id` Int64,
`a` Nullable(String),
`b` Nullable(Int64)
)
ENGINE = MergeTree
ORDER BY id;
```
```
SELECT 'const' AS r, b
FROM
( SELECT a AS r, b FROM test) AS t1
LEFT JOIN
( SELECT a AS r FROM test) AS t2
ON t1.r = t2.r
ORDER BY b;
```
Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Not found column r in block. There are only columns: b. (NOT_FOUND_COLUMN_IN_BLOCK)

| https://github.com/ClickHouse/ClickHouse/issues/57987 | https://github.com/ClickHouse/ClickHouse/pull/57988 | af32b33e934b1c7dbf7473afdc7da31a92f0d3c7 | 6830954cd3be914fd0854a7360604b9d73698149 | "2023-12-18T12:02:51Z" | c++ | "2023-12-20T00:36:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,937 | ["src/Analyzer/Passes/LogicalExpressionOptimizerPass.cpp", "src/Analyzer/Passes/LogicalExpressionOptimizerPass.h", "src/Core/Settings.h", "tests/queries/0_stateless/02477_logical_expressions_optimizer_low_cardinality.reference", "tests/queries/0_stateless/02477_logical_expressions_optimizer_low_cardinality.sql", "tests/queries/0_stateless/02668_logical_optimizer_removing_redundant_checks.reference", "tests/queries/0_stateless/02668_logical_optimizer_removing_redundant_checks.sql", "tests/queries/0_stateless/02952_conjunction_optimization.reference", "tests/queries/0_stateless/02952_conjunction_optimization.sql"] | notEquals(x, 1) AND notEquals(x, 2) AND notEquals(x, 3) chain rewrite to x NOT IN (1, 2, 3) | **Use case**
Optimization for long chain of `AND notEquals` condition, analogue for already existing `OR equals` optimization.
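A minimal sketch of the requested rewrite (table `t` and column `x` are hypothetical):
```sql
-- as written by the user
SELECT count() FROM t WHERE x != 1 AND x != 2 AND x != 3;
-- what the optimizer should effectively produce
SELECT count() FROM t WHERE x NOT IN (1, 2, 3);
```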
| https://github.com/ClickHouse/ClickHouse/issues/57937 | https://github.com/ClickHouse/ClickHouse/pull/58214 | 0e678fb6c1da1a7ccf305015298f68ec9897298e | 745d9bb47f3425e28e5660ed7c730038ffece4ee | "2023-12-16T01:31:35Z" | c++ | "2023-12-27T14:51:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,902 | ["docs/en/operations/system-tables/server_settings.md", "src/Core/ServerSettings.h", "src/Storages/System/StorageSystemServerSettings.cpp"] | Mark every setting in ServerSettings as hot-reloadable or not. | **Use case**
To avoid restarting the server for every settings change, ClickHouse supports dynamic reload of the configuration.
But unfortunately not all changes there can be applied at runtime. To make this more convenient for database operators, let's expose in the `system.server_settings` table whether a setting can be changed without a restart.
**Describe the solution you'd like**
Introduce an additional column in `system.server_settings` named `runtime_reload` of type Enum.
Possible values:
- `FULL` - the setting may be changed to any value.
- `ONLY_INCREASE` - the setting may only be increased (for example, settings like `background_pool_size`, `background_fetches_pool_size`)
- `NO` - the setting can't be changed without restart.
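A hypothetical query against the proposed column (the column does not exist yet):
```sql
SELECT name, value, runtime_reload
FROM system.server_settings
WHERE name LIKE 'background%';
```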
| https://github.com/ClickHouse/ClickHouse/issues/57902 | https://github.com/ClickHouse/ClickHouse/pull/58029 | 06274c7bc641b10038a696549e384b756654ccea | 6665d23243fa0c5a063022b42892977036f766f3 | "2023-12-15T14:36:25Z" | c++ | "2024-01-08T09:09:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,893 | ["programs/server/config.xml", "src/Formats/NativeWriter.cpp", "src/Storages/StorageMemory.cpp", "tests/queries/0_stateless/02973_backup_of_in_memory_compressed.reference", "tests/queries/0_stateless/02973_backup_of_in_memory_compressed.sh"] | Logical error "Bad cast from type DB::ColumnCompressed to DB::ColumnVector<unsigned long>" in upgrade check |
```
2023.12.14 16:56:19.205987 [ 23952 ] {7ba4d3d0-56b3-43f4-8a56-5825dee631ef} <Error> executeQuery: Code: 49. DB::Exception: Got error from localhost:9000. DB::Exception: Bad cast from type DB::ColumnCompressed to DB::ColumnVector<unsigned long>. Stack trace:
--
7. ./build_docker/./src/Backups/IBackupEntriesLazyBatch.cpp:30: DB::IBackupEntriesLazyBatch::BackupEntryFromBatch::isReference() const @ 0x000000001179c36d in /usr/lib/debug/usr/bin/clickhouse.debug
8. ./build_docker/./src/Backups/BackupFileInfo.cpp:0: void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::buildFileInfosForBackupEntries(std::vector<std::pair<String, std::shared_ptr<DB::IBackupEntry const>>, std::allocator<std::pair<String, std::shared_ptr<DB::IBackupEntry const>>>> const&, std::shared_ptr<DB::IBackup const> const&, DB::ReadSettings const&, ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>&)::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000000ffa603a in /usr/lib/debug/usr/bin/clickhouse.debug
9. ./base/base/../base/wide_integer_impl.h:809: ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c5e1fb0 in /usr/lib/debug/usr/bin/clickhouse.debug
10. ./build_docker/./src/Common/ThreadPool.cpp:0: void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c5e5d3c in /usr/lib/debug/usr/bin/clickhouse.debug
11. ./base/base/../base/wide_integer_impl.h:809: void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c5e45d3 in /usr/lib/debug/usr/bin/clickhouse.debug
12. ? @ 0x00007f30996a2ac3 in ?
13. ? @ 0x00007f3099734a40 in ?
. (LOGICAL_ERROR) (version 23.11.2.11 (official build)) (from [::1]:45282) (comment: 02907_backup_mv_with_no_source_table.sh) (in query: backup database test_7 on cluster test_shard_localhost to Disk('backups', '02907_backup_mv_with_no_source_table_test_72');), Stack trace (when copying this message, always include the lines below):
0. ./build_docker/./src/Common/Exception.cpp:97: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c4fd597 in /usr/lib/debug/usr/bin/clickhouse.debug
1. ./contrib/llvm-project/libcxx/include/string:1499: DB::readException(DB::ReadBuffer&, String const&, bool) @ 0x000000000c563b9f in /usr/lib/debug/usr/bin/clickhouse.debug
2. ./contrib/llvm-project/libcxx/include/string:1499: DB::BackupCoordinationStageSync::readCurrentState(std::vector<String, std::allocator<String>> const&, std::vector<String, std::allocator<String>> const&, String const&) const @ 0x000000000ffe4597 in /usr/lib/debug/usr/bin/clickhouse.debug
3. ./contrib/llvm-project/libcxx/include/vector:951: DB::BackupCoordinationStageSync::waitImpl(std::vector<String, std::allocator<String>> const&, String const&, std::optional<std::chrono::duration<long long, std::ratio<1l, 1000l>>>) const @ 0x000000000ffde11c in /usr/lib/debug/usr/bin/clickhouse.debug
4. ./build_docker/./src/Backups/BackupCoordinationRemote.cpp:275: DB::BackupCoordinationRemote::waitForStage(String const&) @ 0x000000000ffabcc3 in /usr/lib/debug/usr/bin/clickhouse.debug
5. ./contrib/llvm-project/libcxx/include/vector:434: DB::BackupsWorker::doBackup(std::shared_ptr<DB::ASTBackupQuery> const&, String const&, String const&, DB::BackupInfo const&, DB::BackupSettings, std::shared_ptr<DB::IBackupCoordination>, std::shared_ptr<DB::Context const> const&, std::shared_ptr<DB::Context>, bool) @ 0x000000000ff97d00 in /usr/lib/debug/usr/bin/clickhouse.debug
6. ./build_docker/./src/Backups/BackupsWorker.cpp:0: DB::BackupsWorker::startMakingBackup(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context const> const&) @ 0x000000000ff92ff4 in /usr/lib/debug/usr/bin/clickhouse.debug
7. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:606: DB::BackupsWorker::start(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>) @ 0x000000000ff928a3 in /usr/lib/debug/usr/bin/clickhouse.debug
8. ./build_docker/./src/Interpreters/InterpreterBackupQuery.cpp:0: DB::InterpreterBackupQuery::execute() @ 0x0000000010ec64a0 in /usr/lib/debug/usr/bin/clickhouse.debug
9. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000011333ea2 in /usr/lib/debug/usr/bin/clickhouse.debug
10. ./build_docker/./src/Interpreters/executeQuery.cpp:1287: DB::executeQuery(String const&, std::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000001132dbba in /usr/lib/debug/usr/bin/clickhouse.debug
11. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x0000000012238829 in /usr/lib/debug/usr/bin/clickhouse.debug
12. ./build_docker/./src/Server/TCPHandler.cpp:2294: DB::TCPHandler::run() @ 0x000000001224d1b9 in /usr/lib/debug/usr/bin/clickhouse.debug
13. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x0000000014c71eb2 in /usr/lib/debug/usr/bin/clickhouse.debug
14. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x0000000014c72cb1 in /usr/lib/debug/usr/bin/clickhouse.debug
15. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:202: Poco::PooledThread::run() @ 0x0000000014d69b47 in /usr/lib/debug/usr/bin/clickhouse.debug
16. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000014d6813c in /usr/lib/debug/usr/bin/clickhouse.debug
17. ? @ 0x00007f30996a2ac3 in ?
18. ? @ 0x00007f3099734a40 in ?
``` | https://github.com/ClickHouse/ClickHouse/issues/57893 | https://github.com/ClickHouse/ClickHouse/pull/59315 | a7483ec10b04e9d87fef33a59ebcce152e5286ea | 2cb2bcfbc3d1aee8edaaf2121502c875f9a28f22 | "2023-12-15T10:29:06Z" | c++ | "2024-01-29T00:50:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,817 | ["src/Storages/StorageMerge.cpp", "src/Storages/StorageMerge.h", "tests/queries/0_stateless/02918_optimize_count_for_merge_tables.reference", "tests/queries/0_stateless/02918_optimize_count_for_merge_tables.sql"] | Support trivial count optimization for `Merge` tables | ```sql
DROP TABLE IF EXISTS mt1;
DROP TABLE IF EXISTS mt2;
DROP TABLE IF EXISTS merge;
CREATE TABLE mt1 (id UInt64) ENGINE = MergeTree ORDER BY id;
CREATE TABLE mt2 (id UInt64) ENGINE = MergeTree ORDER BY id;
CREATE TABLE merge (id UInt64) ENGINE = Merge(currentDatabase(), '^mt[0-9]+$');
INSERT INTO mt1 VALUES (1);
INSERT INTO mt2 VALUES (1);
```
```sql
EXPLAIN SELECT count() FROM mt1;
```
```
┌─explain──────────────────────────────────────────────┐
│ Expression ((Projection + Before ORDER BY))           │
│   MergingAggregated                                   │
│     ReadFromPreparedSource (Optimized trivial count)  │
└───────────────────────────────────────────────────────┘
```
```sql
EXPLAIN SELECT count() FROM merge;
```
```
┌─explain──────────────────────────────────────┐
│ Expression ((Projection + Before ORDER BY))  │
│   Aggregating                                │
│     Expression (Before GROUP BY)             │
│       ReadFromMerge                          │
└──────────────────────────────────────────────┘
```
**Describe the solution you'd like**
Support trivial count optimization (setting `optimize_trivial_count_query`) for tables with the `Merge` engine if all their underlying tables support such optimization.
**Additional context**
See methods `IStorage::supportsTrivialCountOptimization` and `IStorage::totalRows`. | https://github.com/ClickHouse/ClickHouse/issues/57817 | https://github.com/ClickHouse/ClickHouse/pull/57867 | 787f1e7ab86ab4daee9148fcbea3caa305f67a90 | fc67d2c0e984098e492c1111c8b5e3c705a80e86 | "2023-12-13T14:46:53Z" | c++ | "2023-12-17T09:45:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,816 | ["docs/en/sql-reference/functions/array-functions.md", "src/Functions/array/arrayFold.cpp", "tests/performance/array_fold.xml", "tests/queries/0_stateless/02718_array_fold.reference", "tests/queries/0_stateless/02718_array_fold.sql"] | arrayFold of arrayIntersect segfault | This little query breaks ClickHouse. The crash reproduces on arm64 macOS locally, on an amd64 Linux server, and on a freshly installed version.
```
$ clickhouse local
ClickHouse local version 23.12.1.457 (official build).
air.local :) SELECT arrayFold(acc, x -> arrayIntersect(acc, x), [['qwe', 'asd'], ['qwe','asde']], [])
SELECT arrayFold((acc, x) -> arrayIntersect(acc, x), [['qwe', 'asd'], ['qwe', 'asde']], [])
Query id: 660a9bd1-2975-49b8-ba29-35bcfce5901c
Abort trap: 6
``` | https://github.com/ClickHouse/ClickHouse/issues/57816 | https://github.com/ClickHouse/ClickHouse/pull/57836 | b721f027ae5c19603c6e7b08722ca5f5b6bc44e5 | e4901323fd857ec8e2337fa7314b481aca00ea44 | "2023-12-13T14:42:52Z" | c++ | "2023-12-14T06:15:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,773 | ["src/Processors/Transforms/ColumnGathererTransform.cpp", "src/Processors/Transforms/ColumnGathererTransform.h", "src/Storages/MergeTree/MergeTask.cpp", "tests/queries/0_stateless/02981_vertical_merges_memory_usage.reference", "tests/queries/0_stateless/02981_vertical_merges_memory_usage.sql"] | Vertical merge could consume a lot of memory | See the stack trace with memory allocations.
https://pastila.nl/?0001d4ae/376da778773f6223011c64914c06ec3e#xGswcayE+r52tA2oBpGs6A==
Stack trace in short:
Big allocations (up to 2.5 GB) happen in:
```
Allocator<false, false>, 63ul, 64ul>::resize<>(unsigned long)
DB::ColumnString::insertRangeFrom(DB::IColumn const&, unsigned long, unsigned long) | DB::ColumnString::insertFrom(DB::IColumn const&, unsigned long)
DB::ColumnString::gather(DB::ColumnGathererStream&) | DB::ColumnGathererStream::merge()
```
That happens because value sizes vary wildly. The table has a string column with
```
SELECT quantilesExactExclusive(0.5, 0.9, 0.99, 0.999)(length(column))
FROM table
┌─quantilesExactExclusive(0.5, 0.9, 0.99, 0.999)(length(column))─┐
│ [79064,121150,135871,1413296]                                  │
└────────────────────────────────────────────────────────────────┘
```
```
SELECT
max(length(column))
FROM table
┌─max(length(column))─┐
│             9834237 │
└─────────────────────┘
```
Very large strings can occur.
ColumnGathererStream doesn't take the column size in bytes into account: it always gathers `#define DEFAULT_BLOCK_SIZE 65409` rows per block, regardless of how many bytes they occupy.
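On a live server the effect can be observed while such a merge is running, for example via `system.merges` (a hedged illustration):
```sql
SELECT table, elapsed, rows_read, memory_usage
FROM system.merges;
```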
That is a very annoying reason why 8 GB of memory is not enough even for small databases: it leads to a situation where all the data is read into memory. | https://github.com/ClickHouse/ClickHouse/issues/57773 | https://github.com/ClickHouse/ClickHouse/pull/59340 | dfc761ce5f4ffd360c72c37a53ffe27eebe5d86f | 50532c485fdd28a22178aa1bcf38663399a894be | "2023-12-12T12:54:20Z" | c++ | "2024-01-30T02:42:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,736 | ["src/Analyzer/Passes/QueryAnalysisPass.cpp", "src/Interpreters/replaceForPositionalArguments.cpp", "tests/queries/0_stateless/01162_strange_mutations.sh", "tests/queries/0_stateless/01798_having_push_down.sql", "tests/queries/0_stateless/02006_test_positional_arguments.reference", "tests/queries/0_stateless/02006_test_positional_arguments.sql", "tests/queries/0_stateless/02932_group_by_null_fuzzer.sql"] | Negative positional arguments. | **Use case**
`ORDER BY -1` means order by the last column in the SELECT list; negative numbers count from the end of the SELECT list. | https://github.com/ClickHouse/ClickHouse/issues/57736 | https://github.com/ClickHouse/ClickHouse/pull/57741 | b31b4c932f78c8ea4f65657f88d65b494de15db0 | 3d846800e0bdd94916ed8b8faf1c1bc7868ca933 | "2023-12-11T04:53:28Z" | c++ | "2023-12-12T13:16:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,692 | ["src/Processors/Transforms/AggregatingTransform.cpp", "tests/queries/0_stateless/02941_projections_external_aggregation.reference", "tests/queries/0_stateless/02941_projections_external_aggregation.sql"] | Wrong result of external aggregation in case of partially materialized projection | Consider the following SQL script (based on test `01710_projections_partial_optimize_aggregation_in_order`):
```sql
DROP TABLE IF EXISTS in_order_agg_partial_01710;
CREATE TABLE in_order_agg_partial_01710
(
k1 UInt32,
k2 UInt32,
k3 UInt32,
value UInt32
)
ENGINE = MergeTree
ORDER BY tuple();
INSERT INTO in_order_agg_partial_01710 SELECT 1, number%2, number%4, number FROM numbers(50000);
SYSTEM STOP MERGES in_order_agg_partial_01710;
ALTER TABLE in_order_agg_partial_01710 ADD PROJECTION aaaa (
SELECT
k1,
k2,
k3,
sum(value)
GROUP BY k1, k2, k3
);
INSERT INTO in_order_agg_partial_01710 SELECT 1, number%2, number%4, number FROM numbers(100000) LIMIT 50000, 100000;
SELECT '*** correct aggregation ***';
SELECT k1, k2, k3, sum(value) v FROM in_order_agg_partial_01710 GROUP BY k1, k2, k3 ORDER BY k1, k2, k3 SETTINGS optimize_use_projections = 0;
SELECT '*** correct aggregation with projection ***';
SELECT k1, k2, k3, sum(value) v FROM in_order_agg_partial_01710 GROUP BY k1, k2, k3 ORDER BY k1, k2, k3;
SELECT '*** optimize_aggregation_in_order = 0, max_bytes_before_external_group_by = 1, group_by_two_level_threshold = 1 ***';
SELECT k1, k2, k3, sum(value) v FROM in_order_agg_partial_01710 GROUP BY k1, k2, k3 ORDER BY k1, k2, k3 SETTINGS optimize_aggregation_in_order = 0, max_bytes_before_external_group_by = 1, group_by_two_level_threshold = 1;
SELECT '*** optimize_aggregation_in_order = 1, max_bytes_before_external_group_by = 1, group_by_two_level_threshold = 1 ***';
SELECT k1, k2, k3, sum(value) v FROM in_order_agg_partial_01710 GROUP BY k1, k2, k3 ORDER BY k1, k2, k3 SETTINGS optimize_aggregation_in_order = 1, max_bytes_before_external_group_by = 1, group_by_two_level_threshold = 1;
SYSTEM START MERGES in_order_agg_partial_01710;
ALTER TABLE in_order_agg_partial_01710 MATERIALIZE PROJECTION aaaa SETTINGS mutations_sync = 2;
SELECT '*** after materialization ***';
SELECT '*** correct aggregation ***';
SELECT k1, k2, k3, sum(value) v FROM in_order_agg_partial_01710 GROUP BY k1, k2, k3 ORDER BY k1, k2, k3 SETTINGS optimize_use_projections = 0;
SELECT '*** correct aggregation with projection ***';
SELECT k1, k2, k3, sum(value) v FROM in_order_agg_partial_01710 GROUP BY k1, k2, k3 ORDER BY k1, k2, k3;
SELECT '*** optimize_aggregation_in_order = 0, max_bytes_before_external_group_by = 1, group_by_two_level_threshold = 1 ***';
SELECT k1, k2, k3, sum(value) v FROM in_order_agg_partial_01710 GROUP BY k1, k2, k3 ORDER BY k1, k2, k3 SETTINGS optimize_aggregation_in_order = 0, max_bytes_before_external_group_by = 1, group_by_two_level_threshold = 1;
SELECT '*** optimize_aggregation_in_order = 1, max_bytes_before_external_group_by = 1, group_by_two_level_threshold = 1 ***';
SELECT k1, k2, k3, sum(value) v FROM in_order_agg_partial_01710 GROUP BY k1, k2, k3 ORDER BY k1, k2, k3 SETTINGS optimize_aggregation_in_order = 1, max_bytes_before_external_group_by = 1, group_by_two_level_threshold = 1;
```
It gives us the following output:
```
*** correct aggregation ***
1 0 0 1249950000
1 0 2 1250000000
1 1 1 1249975000
1 1 3 1250025000
*** correct aggregation with projection ***
1 0 0 1249950000
1 0 2 1250000000
1 1 1 1249975000
1 1 3 1250025000
*** optimize_aggregation_in_order = 0, max_bytes_before_external_group_by = 1, group_by_two_level_threshold = 1 ***
1 0 0 937475000
1 0 2 937500000
1 1 1 937487500
1 1 3 937512500
*** optimize_aggregation_in_order = 1, max_bytes_before_external_group_by = 1, group_by_two_level_threshold = 1 ***
1 0 0 312475000
1 0 2 312500000
1 1 1 312487500
1 1 3 312512500
*** after materialization ***
*** correct aggregation ***
1 0 0 1249950000
1 0 2 1250000000
1 1 1 1249975000
1 1 3 1250025000
*** correct aggregation with projection ***
1 0 0 1249950000
1 0 2 1250000000
1 1 1 1249975000
1 1 3 1250025000
*** optimize_aggregation_in_order = 0, max_bytes_before_external_group_by = 1, group_by_two_level_threshold = 1 ***
1 0 0 1249950000
1 0 2 1250000000
1 1 1 1249975000
1 1 3 1250025000
*** optimize_aggregation_in_order = 1, max_bytes_before_external_group_by = 1, group_by_two_level_threshold = 1 ***
1 0 0 1249950000
1 0 2 1250000000
1 1 1 1249975000
1 1 3 1250025000
```
You can see that with enabled two-level and external aggregation, the query results are wrong after we've added a projection that is materialized in only one part. Note that the result may differ depending on the value of the `optimize_aggregation_in_order` setting, but it is wrong either way. | https://github.com/ClickHouse/ClickHouse/issues/57692 | https://github.com/ClickHouse/ClickHouse/pull/57790 | b8d274d070b89bdfee578492f8210cd96859fdd8 | 82ebb5e2d1ea81834f8661eec1bf8635ee154f7b | "2023-12-08T18:42:21Z" | c++ | "2023-12-13T21:56:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,686 | ["src/AggregateFunctions/AggregateFunctionSumMap.cpp", "tests/queries/0_stateless/02480_max_map_null_totals.reference", "tests/queries/0_stateless/02480_max_map_null_totals.sql"] | Wrong value of `TOTALS` in function `maxMap` | ```sql
SELECT maxMap([number % 3, (number % 4) - 1], [number, NULL])
FROM numbers(3)
GROUP BY number
WITH TOTALS
ORDER BY number ASC;
```
```
┌─maxMap(array(modulo(number, 3), minus(modulo(number, 4), 1)), array(number, NULL))─┐
│ ([-1,0],[0,0])                                                                     │
│ ([0,1],[0,1])                                                                      │
│ ([1,2],[0,2])                                                                      │
└────────────────────────────────────────────────────────────────────────────────────┘

Totals:
┌─maxMap(array(modulo(number, 3), minus(modulo(number, 4), 1)), array(number, NULL))─┐
│ ([-1,0,1,2],[0,0,0,2])                                                             │
└────────────────────────────────────────────────────────────────────────────────────┘
```
We should have totals: `[-1,0,1,2],[0,0,1,2]` because we have value `1` for key `1` in the second row. This behaviour is reflected in test `02480_max_map_null_totals`, but it's wrong.
However it works correctly with settings `group_by_two_level_threshold = 1, max_bytes_before_external_group_by = 1`:
```sql
SELECT maxMap([number % 3, (number % 4) - 1], [number, NULL])
FROM numbers(3)
GROUP BY number
WITH TOTALS
ORDER BY number ASC
SETTINGS group_by_two_level_threshold = 1, max_bytes_before_external_group_by = 1;
```
```
┌─maxMap(array(modulo(number, 3), minus(modulo(number, 4), 1)), array(number, NULL))─┐
│ ([-1,0],[0,0])                                                                     │
│ ([0,1],[0,1])                                                                      │
│ ([1,2],[0,2])                                                                      │
└────────────────────────────────────────────────────────────────────────────────────┘

Totals:
┌─maxMap(array(modulo(number, 3), minus(modulo(number, 4), 1)), array(number, NULL))─┐
│ ([-1,0,1,2],[0,0,1,2])                                                             │
└────────────────────────────────────────────────────────────────────────────────────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/57686 | https://github.com/ClickHouse/ClickHouse/pull/57795 | 6abe4b113ac82e2260ee09f3bf02bed5507a42ec | c2f32f599426633a4451b8df35be120a5fde37a2 | "2023-12-08T15:58:15Z" | c++ | "2023-12-13T09:31:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,623 | ["src/Interpreters/InterpreterCreateQuery.cpp", "src/Processors/Transforms/buildPushingToViewsChain.cpp", "tests/queries/0_stateless/02174_cte_scalar_cache_mv.reference", "tests/queries/0_stateless/02943_create_query_interpreter_sample_block_fix.reference", "tests/queries/0_stateless/02943_create_query_interpreter_sample_block_fix.sql"] | Materialized view extractAll (and other string function) regression since CH 23.9 | **Describe the unexpected behaviour**
Function `extractAll` expects a const String as its second parameter. When executing a query like this one, the output of the CTE is considered constant and everything works fine. But if that query is turned into an MV, the CTE's output is no longer considered constant.
```sql
WITH coalesce((
SELECT reg
FROM extract_all.regex
), '') AS val
SELECT
extractAll(concat(toString(number), 'a'), assumeNotNull(val))
FROM extract_all.ds;
```
**How to reproduce**
This is the full reproducer:
```sql
DROP DATABASE IF EXISTS extract_all;
CREATE DATABASE extract_all;
CREATE TABLE extract_all.ds (
number UInt32
)
ENGINE=MergeTree
ORDER BY number AS
SELECT 1;
CREATE TABLE extract_all.ds_2 (
arr Array(String)
)
ENGINE=MergeTree
ORDER BY tuple();
CREATE TABLE extract_all.regex
(
`reg` String
)
ENGINE = MergeTree
ORDER BY tuple() AS
SELECT '\d[a-z]';
SELECT '-- Query by itself';
WITH coalesce((
SELECT reg
FROM extract_all.regex
), '') AS val
SELECT
extractAll(concat(toString(number), 'a'), assumeNotNull(val))
FROM extract_all.ds;
SELECT '-- MV';
CREATE MATERIALIZED VIEW extract_all.mv TO extract_all.ds_2
AS
WITH coalesce((
SELECT reg
FROM extract_all.regex
), '') AS val
SELECT
extractAll(concat(toString(number), 'a'), assumeNotNull(val)) AS arr
FROM extract_all.ds;
```
Everything works fine when using CH 23.8 and starts failing from 23.9 onwards.
**Expected behavior**
The MV should work the same as in version 23.8, interpreting CTE's output as constant. | https://github.com/ClickHouse/ClickHouse/issues/57623 | https://github.com/ClickHouse/ClickHouse/pull/57855 | b9408125cc91ecdc61a2dc09785e10fe61d62bd6 | 5290b3c9ce37fd8dc6c4fee091697b00cb48c9bd | "2023-12-07T15:39:22Z" | c++ | "2023-12-18T16:22:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,516 | ["docs/en/sql-reference/functions/string-functions.md", "docs/en/sql-reference/functions/string-replace-functions.md", "src/Functions/concat.cpp", "src/Functions/concatWithSeparator.cpp", "src/Functions/format.cpp", "tests/queries/0_stateless/02935_format_with_arbitrary_types.reference", "tests/queries/0_stateless/02935_format_with_arbitrary_types.sql"] | Support arbitrary arguments in function `format()` | In the spirit of PR #56540, function [format()](https://clickhouse.com/docs/en/sql-reference/functions/string-functions#format) should support arguments of arbitrary type.
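Today this presumably requires converting non-string arguments by hand, e.g.:
```sql
SELECT format('The {0} to all questions is {1}', 'answer', toString(42));
```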
Basically make this work:
```sql
SELECT format('The {0} to all questions is {1}', 'answer', 42);
``` | https://github.com/ClickHouse/ClickHouse/issues/57516 | https://github.com/ClickHouse/ClickHouse/pull/57549 | 6213593206661d882adb984daf5a250a3cc2681e | c141dd1330b529396443909beb1acf123397e66d | "2023-12-05T12:25:34Z" | c++ | "2023-12-11T11:33:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,458 | ["docs/en/sql-reference/functions/array-functions.md", "src/Functions/array/arrayFold.cpp", "tests/performance/array_fold.xml", "tests/queries/0_stateless/02718_array_fold.reference", "tests/queries/0_stateless/02718_array_fold.sql"] | position function behaves incorrectly when used with arrayFold | **Describe what's wrong**
`position` function behaves incorrectly when used with `arrayFold` - it fails to find the substring. I've verified that all other intermediate states of `arrayFold` and the other functions used are correct.
https://fiddle.clickhouse.com/80adfdb5-f671-4d8a-bb27-49d6fdfdada7
`position` seems to fail the string search and returns 0 (substring not found).
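The exact setup is in the fiddle; purely as an illustration of the failing shape (a hypothetical sketch, not the fiddle's code), this is a `position` call inside an `arrayFold` lambda:
```sql
SELECT arrayFold(acc, x -> acc + position(x, 'a'), ['abc', 'bca'], toUInt64(0)) AS res;
-- expected 1 + 3 = 4; the report is that position() returns 0 (not found) inside the lambda
```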
**Does it reproduce on recent release?**
Confirmed on 23.10.3.1 on server as well as clickhouse-local
**How to reproduce**
refer to the ClickHouse fiddle above - it has a minimal reproducible setup
**Expected behavior**
It should return the correct position of the substring.
**Additional context**
I am using the Nix build of ClickHouse, but seeing that it's also reproducible on fiddle, I don't think that's relevant to the issue.
---
*offtopic: I spent multiple hours figuring out why it isn't working. If you're wondering how I found this bug, I was trying to solve this year's Advent of Code with ClickHouse - it helps to learn a lot of the cool functions ClickHouse offers! (also I'm slightly deranged).*
| https://github.com/ClickHouse/ClickHouse/issues/57458 | https://github.com/ClickHouse/ClickHouse/pull/57836 | b721f027ae5c19603c6e7b08722ca5f5b6bc44e5 | e4901323fd857ec8e2337fa7314b481aca00ea44 | "2023-12-03T18:37:33Z" | c++ | "2023-12-14T06:15:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,373 | ["src/Common/Arena.h", "tests/queries/0_stateless/02931_ubsan_error_arena_aligned_alloc.reference", "tests/queries/0_stateless/02931_ubsan_error_arena_aligned_alloc.sql"] | ubsan error in `sumResample` runtime error: applying non-zero offset 7 to null pointer | ``` sql
SELECT sumResample(65535, 20, 1)(number, number % 20) FROM numbers(200)
```
``` sh
1467564- #0 0x55edee203138 in std::__1::align(unsigned long, unsigned long, void*&, unsigned long&) build_docker/./contrib/llvm-project/libcxx/src/memory.cpp:197:72
1467565- #1 0x55edd0efcda0 in DB::Arena::alignedAlloc(unsigned long, unsigned long) (/workspace/clickhouse+0x18da1da0) (BuildId: 34961d56328cefb6a2988ccf823005f66f599fa0)
1467566- #2 0x55ede7dd0bd6 in DB::Aggregator::executeOnBlock(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>, unsigned long, unsigned long, DB::AggregatedDataVariants&, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>&, std::__1::vector<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>, std::__1::allocator<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>>>&, bool&) const build_docker/./src/Interpreters/Aggregator.cpp:1628:58
1467567- #3 0x55edeaf88a5f in DB::AggregatingTransform::consume(DB::Chunk) build_docker/./src/Processors/Transforms/AggregatingTransform.cpp:672:33
1467568- #4 0x55edeaf711f2 in DB::AggregatingTransform::work() build_docker/./src/Processors/Transforms/AggregatingTransform.cpp:631:9
1467569- #5 0x55edea9d94af in DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:47:26
1467570- #6 0x55edea9d94af in DB::ExecutionThreadContext::executeTask() build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:95:9
1467571- #7 0x55edea9c955a in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) build_docker/./src/Processors/Executors/PipelineExecutor.cpp:273:26
1467572- #8 0x55edea9c955a in DB::PipelineExecutor::executeSingleThread(unsigned long) build_docker/./src/Processors/Executors/PipelineExecutor.cpp:239:5
1467573- #9 0x55edea9ca582 in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const build_docker/./src/Processors/Executors/PipelineExecutor.cpp:373:17
1467574- #10 0x55edea9ca582 in decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0&>()()) std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
1467575- #11 0x55edea9ca582 in void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
1467576- #12 0x55edea9ca582 in std::__1::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>::operator()[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12
1467577- #13 0x55edea9ca582 in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>>(std::__1::__function::__policy_storage const*) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16
1467578- #14 0x55eddb92c35a in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
1467579- #15 0x55eddb92c35a in std::__1::function<void ()>::operator()() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
1467580- #16 0x55eddb92c35a in ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__1::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) build_docker/./src/Common/ThreadPool.cpp:421:13
1467581- #17 0x55eddb931393 in void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const build_docker/./src/Common/ThreadPool.cpp:183:73
1467582- #18 0x55eddb931393 in decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()&>(void&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
1467583- #19 0x55eddb931393 in decltype(auto) std::__1::__apply_tuple_impl[abi:v15000]<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()&, std::__1::tuple<>&>(void&&, std::__1::tuple<>&, std::__1::__tuple_indices<>) build_docker/./contrib/llvm-project/libcxx/include/tuple:1789:1
1467584- #20 0x55eddb931393 in decltype(auto) std::__1::apply[abi:v15000]<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()&, std::__1::tuple<>&>(void&&, std::__1::tuple<>&) build_docker/./contrib/llvm-project/libcxx/include/tuple:1798:1
1467585- #21 0x55eddb931393 in ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'()::operator()() build_docker/./src/Common/ThreadPool.h:250:13
1467586- #22 0x55eddb92967a in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
1467587- #23 0x55eddb92967a in std::__1::function<void ()>::operator()() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
1467588- #24 0x55eddb92967a in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) build_docker/./src/Common/ThreadPool.cpp:421:13
1467589- #25 0x55eddb92efa9 in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const build_docker/./src/Common/ThreadPool.cpp:183:73
1467590- #26 0x55eddb92efa9 in decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
1467591- #27 0x55eddb92efa9 in void std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>&, std::__1::__tuple_indices<>) build_docker/./contrib/llvm-project/libcxx/include/thread:284:5
1467592- #28 0x55eddb92efa9 in void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) build_docker/./contrib/llvm-project/libcxx/include/thread:295:5
1467593- #29 0x7f6e57af5ac2 in start_thread nptl/pthread_create.c:442:8
1467594- #30 0x7f6e57b87a3f misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
1467595-
1467596:SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /build/contrib/llvm-project/libcxx/src/memory.cpp:197:72 in
```
| https://github.com/ClickHouse/ClickHouse/issues/57373 | https://github.com/ClickHouse/ClickHouse/pull/57407 | 1f8031c6e1a1e2bd3e43d2413467431fa6617ab1 | e664e66a9a5532bee7ff0533348b041fc2be556f | "2023-11-29T21:10:13Z" | c++ | "2023-12-01T02:53:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,256 | ["src/Interpreters/convertFieldToType.cpp", "tests/queries/0_stateless/02933_compare_with_bool_as_string.reference", "tests/queries/0_stateless/02933_compare_with_bool_as_string.sql"] | RFC: Automatic cast of bool literals in obvious cases | ```sql
create table tab (d Date, b bool) engine = Memory;
insert into tab values ('2023-11-27', 'true'); -- insert of string-encoded bool works
select d = '2023-11-27' from tab;
select b = true from tab;
select b = 'true' from tab; -- throws: comparison with string-encoded bool does not work
```
Booleans should at least be able to compare against obvious string literals such as `'true'`, `'1'`, `'enabled'` etc.
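For comparison, an explicit cast of the literal already works - a minimal sketch of the current workaround:
```sql
select b = CAST('true' AS Bool) from tab;
```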
Related issue: #11630 | https://github.com/ClickHouse/ClickHouse/issues/57256 | https://github.com/ClickHouse/ClickHouse/pull/60160 | f7de95cec333a0795058314237d37c935f75f783 | a1a45ed881f85c2614e20de2ed815734524483e3 | "2023-11-27T11:45:41Z" | c++ | "2024-02-20T16:19:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,253 | ["src/Storages/tests/gtest_transform_query_for_external_database.cpp", "src/Storages/transformQueryForExternalDatabase.cpp"] | _shard_num bug fix | Create a table based on engine PostgreSQL like this
```
CREATE TABLE IF NOT EXISTS xxx
(
xxxx
) ENGINE = PostgreSQL(own_pg, table='xxx');
```
When I execute this SQL, it is transferred as below:
ClickHouse:
```
SELECT * FROM proxy.sys_nic WHERE mode !=100 and type=0 and name!='' and _shard_num in (1) ORDER BY _shard_num,name LIMIT 20;
```
Postgres receive:
```
SELECT "id", "type", "mode", "state", "name", "nick_name", "description", "speed", "mac", "pci_no", "exist", "create_by", "create_time", "update_by", "update_time"
FROM "sys_nic" WHERE ("mode" != 100) AND ("type" = 0) AND 1
```
So Postgres reports the error "The argument to AND must be of type boolean, not of type integer", because PG can't parse **and 1**. | https://github.com/ClickHouse/ClickHouse/issues/57253 | https://github.com/ClickHouse/ClickHouse/pull/56456 | dc12111ed1b888cf7c25a47affc446d4a7a6fb1b | 7f3a082c0e968d52cbbb68bc8f0dfecfe7c79992 | "2023-11-27T09:52:16Z" | c++ | "2023-11-13T14:25:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,194 | ["src/Processors/QueryPlan/Optimizations/optimizeUseNormalProjection.cpp", "tests/queries/0_stateless/01710_normal_projection_join_plan_fix.reference", "tests/queries/0_stateless/01710_normal_projection_join_plan_fix.sql"] | `Block structure mismatch in UnionStep stream: different number of columns` when projections are used with JOIN | How to reproduce
```sql
CREATE TABLE t1 (id UInt32, s String) Engine = MergeTree ORDER BY id;
CREATE TABLE t2 (id1 UInt32, id2 UInt32) Engine = MergeTree ORDER BY id1;
INSERT INTO t2 SELECT * from generateRandom() LIMIT 100;
ALTER TABLE t2 ADD PROJECTION proj (SELECT id2 ORDER BY id2);
INSERT INTO t2 SELECT * from generateRandom() LIMIT 100;
SELECT s FROM t1 as lhs LEFT JOIN (SELECT * FROM t2 WHERE id2 = 1) as rhs ON lhs.id = rhs.id2;
Received exception from server (version 23.11.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Block structure mismatch in UnionStep stream: different number of columns:
id2 UInt32 UInt32(size = 0)
id UInt32 UInt32(size = 0), s String String(size = 0). (LOGICAL_ERROR)
```
cc @amosbird anything familiar?
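If it helps triage: disabling the plan-level projection optimization is presumably a workaround - a sketch, assuming `optimize_use_projections` is the setting that drives this code path:
```sql
SELECT s FROM t1 AS lhs LEFT JOIN (SELECT * FROM t2 WHERE id2 = 1) AS rhs ON lhs.id = rhs.id2
SETTINGS optimize_use_projections = 0;
```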
| https://github.com/ClickHouse/ClickHouse/issues/57194 | https://github.com/ClickHouse/ClickHouse/pull/57196 | de0876ed6804b23e281521ece4c701209e9299e6 | 4c2de5219d8e92ab738f7a52cdd16fcabbb4b327 | "2023-11-24T12:38:41Z" | c++ | "2023-11-27T08:15:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,184 | ["docs/en/operations/system-tables/databases.md"] | DDL of system.databases | execute SQL: select * from system.databases
it returns only 6 columns: name, engine, data_path, metadata_path, uuid, comment
execute SQL: show create system.databases
the result is:
```sql
CREATE TABLE system.databases
(
`name` String,
`engine` String,
`data_path` String,
`metadata_path` String,
`uuid` UUID,
`comment` String,
`database` String
)
ENGINE = SystemDatabases
COMMENT 'SYSTEM TABLE is built on the fly.'
```
There's one more column: database
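A quick probe to see the two columns side by side (just a sketch):
```sql
SELECT name, database FROM system.databases;
```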
I want to know why | https://github.com/ClickHouse/ClickHouse/issues/57184 | https://github.com/ClickHouse/ClickHouse/pull/57228 | d29092f8af27e953a7ecd1ccc09bc533a4f50722 | a956cec61f350927654b4c03cb5d59e51ab6e0cf | "2023-11-24T07:55:03Z" | c++ | "2023-11-26T01:25:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,151 | ["docs/en/sql-reference/functions/string-functions.md", "src/DataTypes/IDataType.h", "src/Functions/GatherUtils/Sources.h", "src/Functions/substring.cpp", "tests/queries/0_stateless/00493_substring_of_enum.reference", "tests/queries/0_stateless/00493_substring_of_enum.sql"] | MySQL compatibility: substring with enums | Required by Tableau via MySQL interface.
Sample generated query:
```sql
SELECT
CONCAT(
SUBSTRING(`cell_towers`.`radio`, 1, 1024),
SUBSTRING(`cell_towers`.`radio`, 1, 1024)
) AS `Calculation_3276650229452562433`
FROM `cell_towers`
GROUP BY 1
```
Simplified:
```sql
SELECT substring(CAST('foo', 'Enum(\'foo\' = 1)'), 1, 1000)
```
fails with
```
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Illegal type Enum8('foo' = 1) of argument of function substring: While processing substring(CAST('foo', 'Enum(\'foo\' = 1)'), 1, 1000). (ILLEGAL_TYPE_OF_ARGUMENT)
```
**Expected behavior**
Treat enums as strings when calling the `substring` function.
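Until then, an explicit cast to String presumably works as a workaround - a minimal sketch:
```sql
SELECT substring(CAST(CAST('foo', 'Enum(\'foo\' = 1)'), 'String'), 1, 1000);
```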
**How to reproduce**
* Which ClickHouse server version to use: latest | https://github.com/ClickHouse/ClickHouse/issues/57151 | https://github.com/ClickHouse/ClickHouse/pull/57277 | d4522ee0887d88ab221325b0dcf115e2a66e5a7e | bbd7d0c0572b35444273b50a7a91dfe298e50c2e | "2023-11-23T15:41:17Z" | c++ | "2023-12-13T16:03:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 57,059 | ["src/Functions/bitCount.cpp", "src/Functions/bitHammingDistance.cpp", "tests/queries/0_stateless/02921_bit_hamming_distance_big_int.reference", "tests/queries/0_stateless/02921_bit_hamming_distance_big_int.sql"] | `bitHammingDistance` is wrong for big integer data types. | ```
milovidov-desktop :) SELECT 314776434768051644139306697240981192872::UInt128 AS x, 0::UInt128 AS y, bitCount(bitXor(x, y)) AS a, bitHammingDistance(x, y) AS b
SELECT
CAST('314776434768051644139306697240981192872', 'UInt128') AS x,
CAST('0', 'UInt128') AS y,
bitCount(bitXor(x, y)) AS a,
bitHammingDistance(x, y) AS b
Query id: b7b00e85-2c4e-428c-8698-e15eea5fae15
ββββββββββββββββββββββββββββββββββββββββxββ¬βyββ¬ββaββ¬ββbββ
β 314776434768051644139306697240981192872 β 0 β 74 β 32 β
βββββββββββββββββββββββββββββββββββββββββββ΄ββββ΄βββββ΄βββββ
1 row in set. Elapsed: 0.001 sec.
```
| https://github.com/ClickHouse/ClickHouse/issues/57059 | https://github.com/ClickHouse/ClickHouse/pull/57073 | 2880e6437ea2aba73ea4f4946ebef4377cc7c502 | d1015aae8e74831e0a466a34e28a5b6ec0b01a8a | "2023-11-21T14:29:56Z" | c++ | "2023-11-22T10:56:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,995 | ["docs/en/sql-reference/functions/string-functions.md", "src/Functions/concat.cpp", "tests/queries/0_stateless/00727_concat.reference", "tests/queries/0_stateless/00727_concat.sql"] | MySQL compatibility: concat should also accept 1 argument | Tableau's `STR(x)` function uses `concat` under the hood with just one argument to convert the input to a String (instead of `CONVERT` or just `CAST x AS type`).
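A minimal repro outside Tableau is just the single-argument call - a sketch:
```sql
SELECT concat('abc'); -- currently rejected: "passed 1, should be at least 2"
```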
A calculated field such as:
```
IF ISNULL([User Id]) THEN [Address] ELSE STR([User Id]) END
```
generates the following error:
```
Code: 42. DB::Exception: Number of arguments for function concat doesn't match: passed 1, should be at least 2.: While processing if(userId IS NULL, address, concat(userId)). (NUMBER_OF_ARGUMENTS_DOESNT_MATCH) (version 23.11.1.1442 (official build))
``` | https://github.com/ClickHouse/ClickHouse/issues/56995 | https://github.com/ClickHouse/ClickHouse/pull/57000 | e34fea5495a6430f9cd69ca4f4482e46242f6fe6 | c5828cf856b61ce92aeaa75eb9c97ac24b1d27bd | "2023-11-20T14:47:47Z" | c++ | "2023-11-21T11:06:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,978 | ["src/IO/S3/URI.cpp", "tests/integration/test_s3_style_link/test.py"] | s3-style URLs don't work | https://pastila.nl/?00a139a3/0b483a6142a9ba07dfcec1c32d6b0fed#3YTUZegWa3J6LQrl0/JOlw== | https://github.com/ClickHouse/ClickHouse/issues/56978 | https://github.com/ClickHouse/ClickHouse/pull/57075 | 30148972ed4422f579030e745a835996b66fcc30 | 823ba2db461b162fd3b849e8b47b1d9f8c0f0357 | "2023-11-19T21:59:41Z" | c++ | "2023-11-29T16:41:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,939 | ["src/Storages/MergeTree/MergeTreeIndexFullText.cpp", "src/Storages/MergeTree/MergeTreeIndexFullText.h", "tests/queries/0_stateless/02943_tokenbf_and_ngrambf_indexes_support_match_function.reference", "tests/queries/0_stateless/02943_tokenbf_and_ngrambf_indexes_support_match_function.sql"] | Add support for bloom filter based indexes for `match`/`REGEXP` | - We have bloom-filter-based skip indexes `ngrambf_v1`/`tokenbf_v1`.
- These indexes are supported by many operators/functions, including `like()`/`has()`/`hasToken()`,
see https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#functions-support
- `match()` function has no optimizations related to bloom-filter-based skip indexes
Use case: user specifies a complex regexp in a `match()` predicate and it is not obvious that performance can be improved if user will add additional `has` condition before `match`.
To implement this: get a list of "required" substrings from the regexp, i.e. strings that are present in every matching string (see `OptimizedRegularExpression::analyze()`), then query the skip indexes with these substrings.
| https://github.com/ClickHouse/ClickHouse/issues/56939 | https://github.com/ClickHouse/ClickHouse/pull/57882 | 2928d8742e0ceb60a4a2165295eadc45f7b32985 | 2166df064021a53d706e5279e5217ba3c48fb455 | "2023-11-17T16:38:51Z" | c++ | "2024-01-04T23:00:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,932 | ["src/Storages/AlterCommands.cpp", "src/Storages/MergeTree/MergeTreeData.cpp", "tests/queries/0_stateless/01213_alter_rename_primary_key_zookeeper_long.sql", "tests/queries/0_stateless/02920_alter_column_of_projections.reference", "tests/queries/0_stateless/02920_alter_column_of_projections.sql", "tests/queries/0_stateless/02920_rename_column_of_skip_indices.reference", "tests/queries/0_stateless/02920_rename_column_of_skip_indices.sql"] | Altering column that is part of a projection may lead to loosing data | ```sql
CREATE TABLE users (uid Int16, name String, age Nullable(Int8),
projection p1 (select age, count() group by age)
) ENGINE=MergeTree order by uid;
INSERT INTO users VALUES (1231, 'John', 11);
INSERT INTO users VALUES (6666, 'Ksenia', 1);
INSERT INTO users VALUES (8888, 'Alice', 1);
INSERT INTO users VALUES (6667, 'Ksenia', null);
alter table users modify column age Nullable(Int32) ;
DB::Exception: Exception happened during execution of mutation 'mutation_5.txt' with part 'all_2_2_0' reas
select count() from users;
ββcount()ββ
β 4 β
βββββββββββ
detach table users;
attach table users;
select count() from users;
ββcount()ββ
β 1 β
βββββββββββ
``` | https://github.com/ClickHouse/ClickHouse/issues/56932 | https://github.com/ClickHouse/ClickHouse/pull/56948 | 50dacbc7acabe87a8cbf159030b6c226fd096efb | 2150308c2317d9b889b14e167398946042b4035c | "2023-11-17T14:28:35Z" | c++ | "2023-12-03T03:24:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,879 | ["src/Client/ClientBase.cpp", "tests/queries/0_stateless/02003_memory_limit_in_client.reference", "tests/queries/0_stateless/02003_memory_limit_in_client.sh"] | The `max_memory_usage_in_client` command line option should support a string value with a suffix (K, M, G, etc). | null | https://github.com/ClickHouse/ClickHouse/issues/56879 | https://github.com/ClickHouse/ClickHouse/pull/57273 | a987fff63074e81fed046339eaf06bfc49abc3b4 | 0e563e652c3470af9dc540c3673752215945b962 | "2023-11-16T19:04:14Z" | c++ | "2023-11-29T17:52:17Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,762 | ["src/AggregateFunctions/AggregateFunctionLargestTriangleThreeBuckets.cpp", "tests/queries/0_stateless/02842_largestTriangleThreeBuckets_aggregate_function.reference", "tests/queries/0_stateless/02842_largestTriangleThreeBuckets_aggregate_function.sql"] | The last two points of largestTriangleThreeBuckets (lttb) have a large x-axis gap | ClickHouse version:
23.10.3.5
https://fiddle.clickhouse.com/9f7212b8-f033-4bae-aa0e-439d939782b6
```sql
CREATE TABLE lttb_test
(
x UInt32,
y UInt32
) ENGINE = MergeTree ORDER BY x;
INSERT INTO lttb_test (x, y) SELECT (number + 1) AS x, (x % 1000) AS y FROM numbers(9999);
SELECT
arrayJoin(lttb(1000)(x, y)) AS point,
tupleElement(point, 1) AS point_x,
point_x - neighbor(point_x, -1) AS point_x_diff_with_previous_row
FROM lttb_test LIMIT 990, 10;
```
Result:
```sql
ββpointβββββββ¬βpoint_xββ¬βpoint_x_diff_with_previous_rowββ
β (8911,911) β 8911 β 9 β
β (8920,920) β 8920 β 9 β
β (8929,929) β 8929 β 9 β
β (8938,938) β 8938 β 9 β
β (8947,947) β 8947 β 9 β
β (8956,956) β 8956 β 9 β
β (8965,965) β 8965 β 9 β
β (8974,974) β 8974 β 9 β
β (8991,991) β 8991 β 17 β
β (9999,999) β 9999 β 1008 β <-- The last two points have a large x-axis gap
ββββββββββββββ΄ββββββββββ΄βββββββββββββββββββββββββββββββββ
```
Plotting a line graph using X and Y of lttb yields the following results. Maybe we can make the x-axis sampling a little more even.

| https://github.com/ClickHouse/ClickHouse/issues/56762 | https://github.com/ClickHouse/ClickHouse/pull/57003 | a1e8c064337a2aa0313e48d5d475e513a7f54c8f | add243593d950c46eb5ec46a780119533d9411b1 | "2023-11-14T17:34:49Z" | c++ | "2023-12-07T11:29:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,729 | ["src/Storages/tests/gtest_transform_query_for_external_database.cpp", "src/Storages/transformQueryForExternalDatabase.cpp"] | clickhouse adapts to postgresql | When I use the ch table and the table imported by the PostgreSql table (pg_algo_input_customer) engine as views, I filter the fields in pg_algo_input_customer. Clickhouse embedded the condition in the pg_algo_input_customer table sql as where 1, but the syntax of where 1 is not supported in pgsql, how do I make this view sql valid using the pg table engine?
The SQL statement:
select date, customer, level
from pg_algo_input_customer
where level != 'ZZ'.
The view contains the 'level' column above. When I use the 'level' column, ClickHouse embeds 'and 1' into the SQL - WHERE ("level" != 'ZZ') AND 1) - and it then throws this error: 'std::exception. Code: 1001, type: pqxx::sql_error, e.what() = ERROR: argument of AND must be type boolean, not type integer'.

| https://github.com/ClickHouse/ClickHouse/issues/56729 | https://github.com/ClickHouse/ClickHouse/pull/56456 | dc12111ed1b888cf7c25a47affc446d4a7a6fb1b | 7f3a082c0e968d52cbbb68bc8f0dfecfe7c79992 | "2023-11-14T09:16:34Z" | c++ | "2023-11-13T14:25:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,682 | ["tests/queries/0_stateless/02903_rmt_retriable_merge_exception.sh"] | `02903_rmt_retriable_merge_exception` is a bit flaky | It looks like it reproduces mostly with Database Replicated tests
https://s3.amazonaws.com/clickhouse-test-reports/56401/86685685d36c4a07c631b84589fcd34004a3877f/stateless_tests__release__databasereplicated__[4_4].html
CI DB: [link](https://play.clickhouse.com/play?user=play#CldJVEgKICAgICcwMjkwM19ybXRfcmV0cmlhYmxlX21lcmdlX2V4Y2VwdGlvbicgQVMgbmFtZV9zdWJzdHIsCiAgICA5MCBBUyBpbnRlcnZhbF9kYXlzLAogICAgKCdTdGF0ZWxlc3MgdGVzdHMgKGFzYW4pJywgJ1N0YXRlbGVzcyB0ZXN0cyAoYWRkcmVzcyknLCAnU3RhdGVsZXNzIHRlc3RzIChhZGRyZXNzLCBhY3Rpb25zKScpIEFTIGJhY2twb3J0X2FuZF9yZWxlYXNlX3NwZWNpZmljX2NoZWNrcwpTRUxFQ1QKICAgIHRvU3RhcnRPZkRheShjaGVja19zdGFydF90aW1lKSBBUyBkLAogICAgY291bnQoKSwKICAgIGdyb3VwVW5pcUFycmF5KHB1bGxfcmVxdWVzdF9udW1iZXIpIEFTIHBycywKICAgIGFueShyZXBvcnRfdXJsKQpGUk9NIGNoZWNrcwpXSEVSRSAoKG5vdygpIC0gdG9JbnRlcnZhbERheShpbnRlcnZhbF9kYXlzKSkgPD0gY2hlY2tfc3RhcnRfdGltZSkgQU5EIChwdWxsX3JlcXVlc3RfbnVtYmVyIE5PVCBJTiAoCiAgICBTRUxFQ1QgcHVsbF9yZXF1ZXN0X251bWJlciBBUyBwcm4KICAgIEZST00gY2hlY2tzCiAgICBXSEVSRSAocHJuICE9IDApIEFORCAoKG5vdygpIC0gdG9JbnRlcnZhbERheShpbnRlcnZhbF9kYXlzKSkgPD0gY2hlY2tfc3RhcnRfdGltZSkgQU5EIChjaGVja19uYW1lIElOIChiYWNrcG9ydF9hbmRfcmVsZWFzZV9zcGVjaWZpY19jaGVja3MpKQopKSBBTkQgKHBvc2l0aW9uKHRlc3RfbmFtZSwgbmFtZV9zdWJzdHIpID4gMCkgQU5EICh0ZXN0X3N0YXR1cyBJTiAoJ0ZBSUwnLCAnRVJST1InLCAnRkxBS1knKSkKR1JPVVAgQlkgZApPUkRFUiBCWSBkIERFU0MK)
cc: @azat | https://github.com/ClickHouse/ClickHouse/issues/56682 | https://github.com/ClickHouse/ClickHouse/pull/57155 | 5769a88b926c80a1f8ac408da0ee103e753d71a8 | e4faec5ef07eb8550d57af5760c1d090453b0749 | "2023-11-13T16:28:44Z" | c++ | "2023-11-24T18:59:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,487 | ["src/Functions/FunctionBinaryArithmetic.h", "src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/02935_ipv6_bit_operations.reference", "tests/queries/0_stateless/02935_ipv6_bit_operations.sql"] | Missing bitwise operator (`bitAnd`) support for IPv6 Native Type and IPv6StringToNum not compatible with IPv6 Data Type | **Describe the issue**
* [Background/Context] As of ClickHouse 23.1 (https://github.com/ClickHouse/ClickHouse/pull/43221), the IPv6 data type is a `UInt128` big-endian data type. In the versions before that (22.X), the IPv6 type was a `FixedString(16)`. This change is not listed as backwards-incompatible in the release docs.
* The `IPv6StringToNum` function still returns a `FixedString(16)` data type instead of an `IPv6` data type.
* Moreover the `bitAnd` operator does not support the IPv6 Data type anymore (It was supported in previous versions). `bitAnd` operator on IPv6 data type is exceptionally useful when it comes to network masking and other IP-related operations.
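As a possible masking workaround on 23.x, the CIDR helper avoids `bitAnd` entirely - a sketch (assuming a /32 prefix, matching the `ffff:ffff::` mask used in the examples below):
```sql
SELECT tupleElement(IPv6CIDRToRange(toIPv6('43:AB::'), 32), 1) AS network; -- 43:ab::
```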
**How to reproduce**
* Which ClickHouse server versions are incompatible?
* 23.1 onwards [However, I only tested 23.3 and 23.8]
*
```sql
SELECT bitAnd(toIPv4('1.2.3.4'), 20000000); -- works correctly
SELECT bitAnd(toIPv4('1.2.3.4'), IPv4StringToNum('1.2.0.0')); -- works correctly
SELECT bitAnd(toIPv6('43:AB::'), toIPv6('ffff:ffff:0000:0000:0000:0000:0000:0000')); -- does not work
SELECT bitAnd(toIPv6('43:AB::'), IPv6StringToNum('ffff:ffff:0000:0000:0000:0000:0000:0000')); -- does not work
```
**Error message and/or stacktrace**
* ClickHouse 23.3.9
```sql
myserver :) SELECT bitAnd(toIPv4('1.2.3.4'), 20000000);
SELECT bitAnd(toIPv4('1.2.3.4'), 20000000)
Query id: 6f2fb352-fe04-43a1-9882-a1d602e2885d
ββbitAnd(toIPv4('1.2.3.4'), 20000000)ββ
β 16777472 β
βββββββββββββββββββββββββββββββββββββββ
1 row in set. Elapsed: 0.004 sec.
myserver :) SELECT bitAnd(toIPv4('1.2.3.4'), IPv4StringToNum('1.2.0.0'))
SELECT bitAnd(toIPv4('1.2.3.4'), IPv4StringToNum('1.2.0.0'))
Query id: 24f317f1-0d8a-42b4-9d00-686161ed3f03
ββbitAnd(toIPv4('1.2.3.4'), IPv4StringToNum('1.2.0.0'))ββ
β 16908288 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
1 row in set. Elapsed: 0.005 sec.
myserver :) SELECT bitAnd(toIPv6('43:AB::'), toIPv6('ffff:ffff:0000:0000:0000:0000:0000:0000'))
SELECT bitAnd(toIPv6('43:AB::'), toIPv6('ffff:ffff:0000:0000:0000:0000:0000:0000'))
Query id: 3da6dc0d-0672-4d80-bc78-d5813dfc09d5
0 rows in set. Elapsed: 0.005 sec.
Received exception from server (version 23.3.9):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Illegal types IPv6 and IPv6 of arguments of function bitAnd: While processing bitAnd(toIPv6('43:AB::'), toIPv6('ffff:ffff:0000:0000:0000:0000:0000:0000')). (ILLEGAL_TYPE_OF_ARGUMENT)
myserver :) SELECT bitAnd(toIPv6('43:AB::'), IPv6StringToNum('ffff:ffff:0000:0000:0000:0000:0000:0000'))
SELECT bitAnd(toIPv6('43:AB::'), IPv6StringToNum('ffff:ffff:0000:0000:0000:0000:0000:0000'))
Query id: 4355c453-ad91-4ef7-a5ad-18aded30bf99
0 rows in set. Elapsed: 0.005 sec.
Received exception from server (version 23.3.9):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Illegal types IPv6 and FixedString(16) of arguments of function bitAnd: While processing bitAnd(toIPv6('43:AB::'), IPv6StringToNum('ffff:ffff:0000:0000:0000:0000:0000:0000')). (ILLEGAL_TYPE_OF_ARGUMENT)
```
* ClickHouse 22.8.21
```sql
myserver :) SELECT bitAnd(toIPv4('1.2.3.4'), 20000000)
SELECT bitAnd(toIPv4('1.2.3.4'), 20000000)
Query id: 131967d4-81d6-4304-abe6-a35d88a67115
ββbitAnd(toIPv4('1.2.3.4'), 20000000)ββ
β 16777472 β
βββββββββββββββββββββββββββββββββββββββ
1 row in set. Elapsed: 0.001 sec.
myserver :) SELECT bitAnd(toIPv4('1.2.3.4'), IPv4StringToNum('1.2.0.0'))
SELECT bitAnd(toIPv4('1.2.3.4'), IPv4StringToNum('1.2.0.0'))
Query id: 91a8c219-3533-421c-9cc7-d49017a2ee01
ββbitAnd(toIPv4('1.2.3.4'), IPv4StringToNum('1.2.0.0'))ββ
β 16908288 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
1 row in set. Elapsed: 0.002 sec.
myserver :) SELECT IPv6NumToString(bitAnd(toIPv6('43:AB::'), toIPv6('ffff:ffff:0000:0000:0000:0000:0000:0000')))
SELECT IPv6NumToString(bitAnd(toIPv6('43:AB::'), toIPv6('ffff:ffff:0000:0000:0000:0000:0000:0000')))
Query id: b0a03283-db12-42b5-bab9-57b8ed8c42bb
ββIPv6NumToString(bitAnd(toIPv6('43:AB::'), toIPv6('ffff:ffff:0000:0000:0000:0000:0000:0000')))ββ
β 43:ab:: β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
1 row in set. Elapsed: 0.004 sec.
myserver :) SELECT IPv6NumToString(bitAnd(toIPv6('43:AB::'), IPv6StringToNum('ffff:ffff:0000:0000:0000:0000:0000:0000')))
SELECT IPv6NumToString(bitAnd(toIPv6('43:AB::'), IPv6StringToNum('ffff:ffff:0000:0000:0000:0000:0000:0000')))
Query id: a6625b0a-0577-4e87-a773-b34112e46753
ββIPv6NumToString(bitAnd(toIPv6('43:AB::'), IPv6StringToNum('ffff:ffff:0000:0000:0000:0000:0000:0000')))ββ
β 43:ab:: β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
1 row in set. Elapsed: 0.003 sec.
```
**Additional context**
N/A
| https://github.com/ClickHouse/ClickHouse/issues/56487 | https://github.com/ClickHouse/ClickHouse/pull/57707 | 40230579e69f466d0f419d361112259be68640a8 | 0e548a4caf3a67f6391d7243e70e40f5456b3871 | "2023-11-09T02:36:18Z" | c++ | "2023-12-12T06:57:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,482 | ["docker/test/performance-comparison/config/users.d/perf-comparison-tweaks-users.xml", "programs/server/embedded.xml", "programs/server/users.xml", "tests/integration/test_named_collections/configs/users.d/users_no_default_access.xml", "tests/integration/test_storage_s3/test.py"] | A question about `access_management` | Do you remember why we don't have `access_management` by default?
What if we enable it in the next version as a backward incompatible change? | https://github.com/ClickHouse/ClickHouse/issues/56482 | https://github.com/ClickHouse/ClickHouse/pull/56619 | 6414ceffb3ece5bc1875e6d9a0e481e9c220882e | 32b4d6ccf8191b1ef69df8b56e102678cefd51d9 | "2023-11-08T22:00:43Z" | c++ | "2023-11-18T08:22:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,449 | ["programs/server/Server.cpp", "src/Common/SensitiveDataMasker.cpp", "src/Common/SensitiveDataMasker.h", "src/Interpreters/Context.cpp", "src/Interpreters/Context.h", "tests/integration/test_reload_query_masking_rules/__init__.py", "tests/integration/test_reload_query_masking_rules/configs/changed_settings.xml", "tests/integration/test_reload_query_masking_rules/configs/empty_settings.xml", "tests/integration/test_reload_query_masking_rules/test.py"] | Make `query_masking_rules` reloadable without restart of a server | **Use case**
Currently, if you want to redact new things, you have to restart the clickhouse server to apply the new masking rules. It would be great to be able to take new redacting rules into use without need to restart the server.
**Additional context**
The change seems to be quite simple, so if you think it's OK to be propagated, I would like to implement it myself (I have been poking into that yesterday and I think code-wise it should be pretty much ready).
| https://github.com/ClickHouse/ClickHouse/issues/56449 | https://github.com/ClickHouse/ClickHouse/pull/56573 | 1e464609107105bce48ef990eaacd1a41ddc43eb | 3067ca64df756c9c469bfc71a53686c213a82351 | "2023-11-08T07:14:21Z" | c++ | "2023-11-15T23:27:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,438 | ["docker/test/base/setup_export_logs.sh", "programs/server/users.d/allow_introspection_functions.xml", "programs/server/users.d/allow_introspection_functions.yaml", "tests/config/install.sh", "tests/config/users.d/allow_introspection_functions.yaml"] | `trace_log` exported to the CI database in ClickHouse Cloud should be symbolized | null | https://github.com/ClickHouse/ClickHouse/issues/56438 | https://github.com/ClickHouse/ClickHouse/pull/56613 | 82b41f232ad397da6d92a13bca583ee1a3c8a847 | e1ac4c3bedbbd9a9f314369756e39d9a18cdfb25 | "2023-11-08T03:41:20Z" | c++ | "2023-11-12T01:51:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,428 | ["docs/en/operations/server-configuration-parameters/settings.md", "src/Common/ErrorCodes.cpp", "src/Core/ServerSettings.h", "src/Storages/StorageMaterializedView.cpp", "tests/integration/test_limit_materialized_view_count/__init__.py", "tests/integration/test_limit_materialized_view_count/configs/max_num_limit.xml", "tests/integration/test_limit_materialized_view_count/test.py"] | A limit on the number of materialized views attached to a table. | A limit can be configurable by a query-level setting (the setting could be constrained then).
It can be set to 100 by default. | https://github.com/ClickHouse/ClickHouse/issues/56428 | https://github.com/ClickHouse/ClickHouse/pull/58068 | 24b8bbe9fad03a6591fa6c3871927b0ac8af2070 | 09e24ed6c5bc1d36ffdb4ba7fefc9d22900b0d98 | "2023-11-07T18:36:04Z" | c++ | "2024-01-22T21:50:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,417 | ["src/Processors/QueryPlan/PartsSplitter.cpp", "tests/queries/0_stateless/02867_nullable_primary_key_final.reference", "tests/queries/0_stateless/02867_nullable_primary_key_final.sql"] | allow_nullable_key + Final = incorrect result | ```sql
CREATE TABLE t (
o Nullable(String),
p Nullable(String)
) ENGINE = ReplacingMergeTree
ORDER BY (p, o)
SETTINGS allow_nullable_key = 1, index_granularity = 2;
insert into t select number, null from numbers(10);
select count() from t format Pretty;
+---------+
| count() |
+---------+
| 10 |
+---------+
select count() from t FINAL format Pretty;
+---------+
| count() |
+---------+
| 4 | --<<<--- expected the same result = 10
+---------+
```
https://fiddle.clickhouse.com/14cc46e6-ca3a-4e85-9568-9b4df3fb1567
23.8 https://fiddle.clickhouse.com/048a71e8-52f5-4e81-a5fa-e3d6aa9fd946
```
select count() from t FINAL format Pretty;
+---------+
| count() |
+---------+
| 0 |
+---------+
```
21.8 -- correct result https://fiddle.clickhouse.com/a10eeea9-991d-4716-b4c0-6b4ed4e59673
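For contrast, the same shape with a non-nullable key behaves as expected - a minimal sketch (the table name is just for illustration):
```sql
CREATE TABLE t_not_null (o String, p String) ENGINE = ReplacingMergeTree
ORDER BY (p, o) SETTINGS index_granularity = 2;
INSERT INTO t_not_null SELECT toString(number), '' FROM numbers(10);
SELECT count() FROM t_not_null FINAL; -- 10, with and without FINAL
```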
more repro https://fiddle.clickhouse.com/d3e1d257-e233-455a-a282-1bb43cec904b | https://github.com/ClickHouse/ClickHouse/issues/56417 | https://github.com/ClickHouse/ClickHouse/pull/56452 | b4b3cb2291c1d55af10d4251eb866cd13312e69b | 1e464609107105bce48ef990eaacd1a41ddc43eb | "2023-11-07T13:49:01Z" | c++ | "2023-11-15T21:27:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,414 | ["base/base/Decimal_fwd.h", "src/Functions/FunctionBinaryArithmetic.h", "src/Functions/IsOperation.h", "tests/queries/0_stateless/00700_decimal_arithm.reference", "tests/queries/0_stateless/01717_int_div_float_too_large_ubsan.sql", "tests/queries/0_stateless/02975_intdiv_with_decimal.reference", "tests/queries/0_stateless/02975_intdiv_with_decimal.sql"] | why intDivOrZero result is decimal | SELECT
toDecimal32(161.73,4) sumRevenue,
6962 sumInstall,
if(sumInstall >0,intDivOrZero(sumRevenue,sumInstall) ,0) eachOfferRevenue

| https://github.com/ClickHouse/ClickHouse/issues/56414 | https://github.com/ClickHouse/ClickHouse/pull/59243 | f10a8868e191c622d8f2e82462bcf9a94f5eda59 | 491a4cd1e77c188efe7e0efdeb7ff8ebac6aab4c | "2023-11-07T11:31:36Z" | c++ | "2024-02-19T14:28:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,405 | ["src/Processors/Transforms/buildPushingToViewsChain.cpp", "src/Storages/ColumnsDescription.cpp", "src/Storages/ColumnsDescription.h", "tests/queries/0_stateless/02933_ephemeral_mv.reference", "tests/queries/0_stateless/02933_ephemeral_mv.sql"] | Default value of EPHEMERAL column written by materialized view instead of actual value |
```sql
CREATE TABLE raw
(
name String,
ts String
) ENGINE = MergeTree
ORDER BY (name, ts);
CREATE TABLE parsed
(
name String,
ts_ephemeral Nullable(DateTime64(9)), -- no EPHEMERAL
ts DateTime64(9, 'UTC') MATERIALIZED if(ts_ephemeral IS NULL, date(0), ts_ephemeral),
) ENGINE = MergeTree
ORDER BY (name, ts);
CREATE TABLE parsed_eph
(
name String,
ts_ephemeral Nullable(DateTime64(9)) EPHEMERAL, -- with EPHEMERAL
ts DateTime64(9, 'UTC') MATERIALIZED if(ts_ephemeral IS NULL, date(0), ts_ephemeral),
) ENGINE = MergeTree
ORDER BY (name, ts);
CREATE MATERIALIZED VIEW parse_mv_eph
TO parsed_eph
AS
SELECT
name,
toDateTime64OrNull(ts, 9 ,'UTC') as ts_ephemeral
FROM raw;
CREATE MATERIALIZED VIEW parse_mv
TO parsed
AS
SELECT
name,
toDateTime64OrNull(ts, 9 ,'UTC') as ts_ephemeral
FROM raw;
INSERT INTO raw VALUES ('abc', '1451611580')
SELECT 'input_parsed';
SELECT
name,
toDateTime64OrNull(ts, 9 ,'UTC') as ts_ephemeral
FROM raw;
SELECT 'parsed';
SELECT name, ts FROM parsed;
SELECT 'parsed_eph';
SELECT name, ts FROM parsed_eph;
```
```
input_parsed
abc 2016-01-01 01:26:20.000000000
parsed
abc 2016-01-01 01:26:20.000000000
parsed_eph
abc 1970-01-01 00:00:00.000000000
```
https://fiddle.clickhouse.com/0f10a874-8f3f-464a-a21a-5bf39d8d786a | https://github.com/ClickHouse/ClickHouse/issues/56405 | https://github.com/ClickHouse/ClickHouse/pull/57461 | 05bc8ef1e02b9c7332f08091775b255d191341bf | ac7210b9b30129d6de8478cf8702c3ff523aa2ab | "2023-11-07T09:54:27Z" | c++ | "2023-12-06T21:46:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,344 | ["docs/en/interfaces/formats.md", "src/Formats/NumpyDataTypes.h", "src/Processors/Formats/Impl/NpyRowInputFormat.cpp", "src/Processors/Formats/Impl/NpyRowInputFormat.h", "tests/queries/0_stateless/02895_npy_format.reference", "tests/queries/0_stateless/02895_npy_format.sh", "tests/queries/0_stateless/data_npy/float_16.npy", "tests/queries/0_stateless/data_npy/npy_inf_nan_null.npy"] | Npy format should support 16-bit float by converting it to Float32. | **Use case**
https://pastila.nl/?0e0eb514/37c1d09e8f925eb452046bf272119a78#J7MgyKpFZ5kjSFXSTwCZRQ==
```
wget --tries=100 https://deploy.laion.ai/8f83b608504d46bb81708ec86e912220/embeddings/img_emb/img_emb_1.npy
```
| https://github.com/ClickHouse/ClickHouse/issues/56344 | https://github.com/ClickHouse/ClickHouse/pull/56424 | 4cc2d6baa59ca5bb1f8c2e921d10cc7103c5060a | 1831ecc38f778635cebfcceffa33cd0103c04f90 | "2023-11-05T20:01:15Z" | c++ | "2023-11-16T16:00:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,202 | ["docker/test/stateless/stress_tests.lib", "docker/test/upgrade/run.sh", "src/Storages/MergeTree/MutateTask.cpp", "tests/config/config.d/block_number.xml", "tests/config/install.sh", "tests/queries/0_stateless/02973_block_number_sparse_serialization_and_mutation.reference", "tests/queries/0_stateless/02973_block_number_sparse_serialization_and_mutation.sql"] | Logical error: Invalid number of rows in Chunk column UInt64 position 4: expected 8192, got 2 (`_block_number`) | ERROR: type should be string, got "https://s3.amazonaws.com/clickhouse-test-reports/55987/073a6a6f0e8074c1a89f0b6f839e52447f8b8b3e/upgrade_check__tsan_.html\r\n\r\nhttps://s3.amazonaws.com/clickhouse-test-reports/55978/985c74b20f67b15196c69426295a50d55e4d4e32/upgrade_check__tsan_.html\r\n\r\nIt's related to `_block_number` column (introduced in https://github.com/ClickHouse/ClickHouse/pull/47532)\r\n\r\nCan be reproduced by `01079_parallel_alter_add_drop_column_zookeeper` with `allow_experimental_block_number_column=1`:\r\n```\r\ndiff --git a/tests/queries/0_stateless/01079_parallel_alter_add_drop_column_zookeeper.sh b/tests/queries/0_stateless/01079_parallel_alter_add_drop_column_zookeeper.sh\r\nindex bfdea95fa9e..60e3a1bc006 100755\r\n--- a/tests/queries/0_stateless/01079_parallel_alter_add_drop_column_zookeeper.sh\r\n+++ b/tests/queries/0_stateless/01079_parallel_alter_add_drop_column_zookeeper.sh\r\n@@ -15,7 +15,12 @@ done\r\n \r\n \r\n for i in $(seq $REPLICAS); do\r\n- $CLICKHOUSE_CLIENT --query \"CREATE TABLE concurrent_alter_add_drop_$i (key UInt64, value0 UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/concurrent_alter_add_drop_column', '$i') ORDER BY key SETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192, index_granularity_bytes = '10Mi'\"\r\n+ $CLICKHOUSE_CLIENT --query \"CREATE TABLE concurrent_alter_add_drop_$i (key UInt64, value0 UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/concurrent_alter_add_drop_column', '$i') ORDER BY key\r\n+ SETTINGS max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192, index_granularity_bytes = '10Mi',\r\n+ max_replicated_mutations_in_queue = 1000, number_of_free_entries_in_pool_to_execute_mutation = 0, max_replicated_merges_in_queue = 1000, index_granularity = 8192, index_granularity_bytes = '10Mi', min_bytes_for_wide_part = 0,\r\n+ ratio_of_defaults_for_sparse_serialization = 0.5678088665008545, replace_long_file_name_to_hash = false, max_file_name_length = 66, merge_max_block_size = 23954, prefer_fetch_merged_part_size_threshold = 10737418240,\r\n+ vertical_merge_algorithm_min_rows_to_activate = 1, vertical_merge_algorithm_min_columns_to_activate = 46, min_merge_bytes_to_use_direct_io = 1, allow_vertical_merges_from_compact_to_wide_parts = false, marks_compress_block_size = 58933, primary_key_compress_block_size = 86483,\r\n+ allow_experimental_block_number_column=1\"\r\n done\r\n \r\n $CLICKHOUSE_CLIENT --query \"INSERT INTO concurrent_alter_add_drop_1 SELECT number, number + 10 from numbers(100000)\"\r\n@@ -31,7 +36,7 @@ function alter_thread()\r\n REPLICA=$(($RANDOM % 3 + 1))\r\n ADD=$(($RANDOM % 5 + 1))\r\n $CLICKHOUSE_CLIENT --query \"ALTER TABLE concurrent_alter_add_drop_$REPLICA 
ADD COLUMN value$ADD UInt32 DEFAULT 42 SETTINGS replication_alter_partitions_sync=0\"; # additionaly we don't wait anything for more heavy concurrency\r\n- DROP=$(($RANDOM % 5 + 1))\r\n+ DROP=$(($RANDOM % 15 + 1))\r\n $CLICKHOUSE_CLIENT --query \"ALTER TABLE concurrent_alter_add_drop_$REPLICA DROP COLUMN value$DROP SETTINGS replication_alter_partitions_sync=0\"; # additionaly we don't wait anything for more heavy concurrency\r\n sleep 0.$RANDOM\r\n done\r\n@@ -63,7 +68,7 @@ export -f optimize_thread;\r\n export -f insert_thread;\r\n \r\n \r\n-TIMEOUT=30\r\n+TIMEOUT=60\r\n \r\n # Sometimes we detach and attach tables\r\n timeout $TIMEOUT bash -c alter_thread 2> /dev/null &\r\n```\r\n\r\n```\r\n2023.11.01 15:02:01.995042 [ 20329 ] {55f70984-4f74-4858-b793-ea79f8d01b67::all_0_2_14_4} <Fatal> : Logical error: 'Invalid number of rows in Chunk column UInt64 position 4: expected 8192, got 2'.\r\n2023.11.01 15:02:01.995898 [ 20761 ] {} <Fatal> BaseDaemon: ########## Short fault info ############\r\n2023.11.01 15:02:01.996121 [ 20761 ] {} <Fatal> BaseDaemon: (version 23.10.1.1, build id: EC9D8BF9A510B6167AE23FBBC122701ACC2A81E2, git hash: ac089003717e3805d9a9da20fe6a28793dde6219) (from thread 20329) Received signal 6\r\n2023.11.01 15:02:01.996290 [ 20761 ] {} <Fatal> BaseDaemon: Signal description: Aborted\r\n2023.11.01 15:02:01.996422 [ 20761 ] {} <Fatal> BaseDaemon: \r\n2023.11.01 15:02:01.996634 [ 20761 ] {} <Fatal> BaseDaemon: Stack trace: 0x00007fdcd4bc283c 0x00007fdcd4b72668 0x00007fdcd4b5a4b8 0x0000000014cffdf8 0x0000000014cffe95 0x0000000014d00394 0x000000000b170fd7 0x000000000b2526ae 0x0000000020bfe708 0x0000000020bfe4df 0x00000000205a14b0 0x0000000020c143e0 0x0000000020c14023 0x0000000020c51850 0x0000000020c51521 0x0000000020c33c3e 0x0000000020c3364c 0x0000000020c5cc4b 0x0000000020c5cd37 0x00000000202625c3 0x000000002027b298 0x000000002027b275 0x000000002027b255 0x000000002027b235 0x000000002027b1fd 0x0000000014cbf556 0x0000000014cbe5b5 0x00000000202624c6 0x0000000020266d63 0x0000000020253124 0x000000002072e007 0x000000002072c7fc 0x000000002029beda 0x000000002029cb37 0x00000000202a5f18 0x00000000202a5ef5 0x00000000202a5ed5 0x00000000202a5eb5 0x00000000202a5e7d 0x0000000014d835b6 0x0000000014d82915\r\n2023.11.01 15:02:01.996863 [ 20761 ] {} <Fatal> BaseDaemon: ########################################\r\n2023.11.01 15:02:01.997011 [ 20761 ] {} <Fatal> BaseDaemon: (version 23.10.1.1, build id: EC9D8BF9A510B6167AE23FBBC122701ACC2A81E2, git hash: ac089003717e3805d9a9da20fe6a28793dde6219) (from thread 20329) (query_id: 55f70984-4f74-4858-b793-ea79f8d01b67::all_0_2_14_4) (query: ) Received signal Aborted (6)\r\n2023.11.01 15:02:01.997164 [ 20761 ] {} <Fatal> BaseDaemon: \r\n2023.11.01 15:02:01.997284 [ 20761 ] {} <Fatal> BaseDaemon: Stack trace: 0x00007fdcd4bc283c 0x00007fdcd4b72668 0x00007fdcd4b5a4b8 0x0000000014cffdf8 0x0000000014cffe95 0x0000000014d00394 0x000000000b170fd7 0x000000000b2526ae 0x0000000020bfe708 0x0000000020bfe4df 0x00000000205a14b0 0x0000000020c143e0 0x0000000020c14023 0x0000000020c51850 0x0000000020c51521 0x0000000020c33c3e 0x0000000020c3364c 0x0000000020c5cc4b 0x0000000020c5cd37 0x00000000202625c3 0x000000002027b298 0x000000002027b275 0x000000002027b255 0x000000002027b235 0x000000002027b1fd 0x0000000014cbf556 0x0000000014cbe5b5 0x00000000202624c6 0x0000000020266d63 0x0000000020253124 0x000000002072e007 0x000000002072c7fc 0x000000002029beda 0x000000002029cb37 0x00000000202a5f18 0x00000000202a5ef5 0x00000000202a5ed5 0x00000000202a5eb5 0x00000000202a5e7d 0x0000000014d835b6 
0x0000000014d82915\r\n2023.11.01 15:02:01.997478 [ 20761 ] {} <Fatal> BaseDaemon: 4. ? @ 0x00007fdcd4bc283c in ?\r\n2023.11.01 15:02:01.997628 [ 20761 ] {} <Fatal> BaseDaemon: 5. ? @ 0x00007fdcd4b72668 in ?\r\n2023.11.01 15:02:01.997773 [ 20761 ] {} <Fatal> BaseDaemon: 6. ? @ 0x00007fdcd4b5a4b8 in ?\r\n2023.11.01 15:02:01.997919 [ 20340 ] {2f84417b-8cc4-493d-95b1-cc41e8ce8d9f::all_0_2_14_4} <Fatal> : Logical error: 'Invalid number of rows in Chunk column UInt64 position 4: expected 8192, got 2'.\r\n2023.11.01 15:02:01.998818 [ 20762 ] {} <Fatal> BaseDaemon: ########## Short fault info ############\r\n2023.11.01 15:02:01.999013 [ 20762 ] {} <Fatal> BaseDaemon: (version 23.10.1.1, build id: EC9D8BF9A510B6167AE23FBBC122701ACC2A81E2, git hash: ac089003717e3805d9a9da20fe6a28793dde6219) (from thread 20340) Received signal 6\r\n2023.11.01 15:02:01.999195 [ 20762 ] {} <Fatal> BaseDaemon: Signal description: Aborted\r\n2023.11.01 15:02:01.999333 [ 20762 ] {} <Fatal> BaseDaemon: \r\n2023.11.01 15:02:01.999566 [ 20762 ] {} <Fatal> BaseDaemon: Stack trace: 0x00007fdcd4bc283c 0x00007fdcd4b72668 0x00007fdcd4b5a4b8 0x0000000014cffdf8 0x0000000014cffe95 0x0000000014d00394 0x000000000b170fd7 0x000000000b2526ae 0x0000000020bfe708 0x0000000020bfe4df 0x00000000205a14b0 0x0000000020c143e0 0x0000000020c14023 0x0000000020c51850 0x0000000020c51521 0x0000000020c33c3e 0x0000000020c3364c 0x0000000020c5cc4b 0x0000000020c5cd37 0x00000000202625c3 0x000000002027b298 0x000000002027b275 0x000000002027b255 0x000000002027b235 0x000000002027b1fd 0x0000000014cbf556 0x0000000014cbe5b5 0x00000000202624c6 0x0000000020266d63 0x0000000020253124 0x000000002072e007 0x000000002072c7fc 0x000000002029beda 0x000000002029cb37 0x00000000202a5f18 0x00000000202a5ef5 0x00000000202a5ed5 0x00000000202a5eb5 0x00000000202a5e7d 0x0000000014d835b6 0x0000000014d82915\r\n2023.11.01 15:02:01.999783 [ 20762 ] {} <Fatal> BaseDaemon: ########################################\r\n2023.11.01 15:02:01.999949 [ 20762 ] {} <Fatal> BaseDaemon: (version 23.10.1.1, build id: EC9D8BF9A510B6167AE23FBBC122701ACC2A81E2, git hash: ac089003717e3805d9a9da20fe6a28793dde6219) (from thread 20340) (query_id: 2f84417b-8cc4-493d-95b1-cc41e8ce8d9f::all_0_2_14_4) (query: ) Received signal Aborted (6)\r\n2023.11.01 15:02:02.000112 [ 20762 ] {} <Fatal> BaseDaemon: \r\n2023.11.01 15:02:02.000228 [ 20762 ] {} <Fatal> BaseDaemon: Stack trace: 0x00007fdcd4bc283c 0x00007fdcd4b72668 0x00007fdcd4b5a4b8 0x0000000014cffdf8 0x0000000014cffe95 0x0000000014d00394 0x000000000b170fd7 0x000000000b2526ae 0x0000000020bfe708 0x0000000020bfe4df 0x00000000205a14b0 0x0000000020c143e0 0x0000000020c14023 0x0000000020c51850 0x0000000020c51521 0x0000000020c33c3e 0x0000000020c3364c 0x0000000020c5cc4b 0x0000000020c5cd37 0x00000000202625c3 0x000000002027b298 0x000000002027b275 0x000000002027b255 0x000000002027b235 0x000000002027b1fd 0x0000000014cbf556 0x0000000014cbe5b5 0x00000000202624c6 0x0000000020266d63 0x0000000020253124 0x000000002072e007 0x000000002072c7fc 0x000000002029beda 0x000000002029cb37 0x00000000202a5f18 0x00000000202a5ef5 0x00000000202a5ed5 0x00000000202a5eb5 0x00000000202a5e7d 0x0000000014d835b6 0x0000000014d82915\r\n2023.11.01 15:02:02.000421 [ 20762 ] {} <Fatal> BaseDaemon: 4. ? @ 0x00007fdcd4bc283c in ?\r\n2023.11.01 15:02:02.000571 [ 20762 ] {} <Fatal> BaseDaemon: 5. ? @ 0x00007fdcd4b72668 in ?\r\n2023.11.01 15:02:02.000723 [ 20762 ] {} <Fatal> BaseDaemon: 6. ? @ 0x00007fdcd4b5a4b8 in ?\r\n2023.11.01 15:02:02.198785 [ 20761 ] {} <Fatal> BaseDaemon: 7. 
/home/tavplubix/ch/ClickHouse/src/Common/Exception.cpp:43: DB::abortOnFailedAssertion(String const&) @ 0x0000000014cffdf8
[The three merge threads 20760, 20761 and 20762 printed this identical stack trace interleaved between 15:02:02 and 15:02:15; it is reproduced once below, with the per-line log timestamps and the repeated binary path (/home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse) trimmed.]
8. src/Common/Exception.cpp:70: DB::handle_error_code(String const&, int, bool, std::vector<void*, std::allocator<void*>> const&) @ 0x0000000014cffe95
9. src/Common/Exception.cpp:107: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x0000000014d00394
10. src/Common/Exception.h:85: DB::Exception::Exception(String&&, int, bool) @ 0x000000000b170fd7
11. src/Common/Exception.h:113: DB::Exception::Exception<String, String, String>(int, FormatStringHelperImpl<std::type_identity<String>::type, std::type_identity<String>::type, std::type_identity<String>::type>, String&&, String&&, String&&) @ 0x000000000b2526ae
12. src/Processors/Chunk.cpp:75: DB::Chunk::checkNumRowsIsConsistent() @ 0x0000000020bfe708
13. src/Processors/Chunk.cpp:19: DB::Chunk::Chunk(std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>, unsigned long) @ 0x0000000020bfe4df
14. src/Storages/MergeTree/MergeTreeSequentialSource.cpp:208: DB::MergeTreeSequentialSource::generate() @ 0x00000000205a14b0
15. src/Processors/ISource.cpp:139: DB::ISource::tryGenerate() @ 0x0000000020c143e0
16. src/Processors/ISource.cpp:108: DB::ISource::work() @ 0x0000000020c14023
17. src/Processors/Executors/ExecutionThreadContext.cpp:47: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) @ 0x0000000020c51850
18. src/Processors/Executors/ExecutionThreadContext.cpp:95: DB::ExecutionThreadContext::executeTask() @ 0x0000000020c51521
19. src/Processors/Executors/PipelineExecutor.cpp:272: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x0000000020c33c3e
20. src/Processors/Executors/PipelineExecutor.cpp:147: DB::PipelineExecutor::executeStep(std::atomic<bool>*) @ 0x0000000020c3364c
21. src/Processors/Executors/PullingPipelineExecutor.cpp:54: DB::PullingPipelineExecutor::pull(DB::Chunk&) @ 0x0000000020c5cc4b
22. src/Processors/Executors/PullingPipelineExecutor.cpp:65: DB::PullingPipelineExecutor::pull(DB::Block&) @ 0x0000000020c5cd37
23. src/Storages/MergeTree/MergeTask.cpp:445: DB::MergeTask::ExecuteAndFinalizeHorizontalPart::executeImpl() @ 0x00000000202625c3
24. src/Storages/MergeTree/MergeTask.h:241: DB::MergeTask::ExecuteAndFinalizeHorizontalPart::subtasks::'lambda0'()::operator()() const @ 0x000000002027b298
25-30. libcxx std::__invoke / std::function plumbing for the subtask lambda (invoke.h:394, invoke.h:470, function.h:235, function.h:716, function.h:848, function.h:1187)
31. src/Storages/MergeTree/MergeTask.cpp:433: DB::MergeTask::ExecuteAndFinalizeHorizontalPart::execute() @ 0x00000000202624c6
32. src/Storages/MergeTree/MergeTask.cpp:854: DB::MergeTask::execute() @ 0x0000000020266d63
33. src/Storages/MergeTree/MergeFromLogEntryTask.h:36: DB::MergeFromLogEntryTask::executeInnerTask() @ 0x0000000020253124
34. src/Storages/MergeTree/ReplicatedMergeMutateTaskBase.cpp:190: DB::ReplicatedMergeMutateTaskBase::executeImpl() @ 0x000000002072e007
35. src/Storages/MergeTree/ReplicatedMergeMutateTaskBase.cpp:46: DB::ReplicatedMergeMutateTaskBase::executeStep() @ 0x000000002072c7fc
36. src/Storages/MergeTree/MergeTreeBackgroundExecutor.cpp:280: DB::MergeTreeBackgroundExecutor<DB::DynamicRuntimeQueue>::routine(std::shared_ptr<DB::TaskRuntimeData>) @ 0x000000002029beda
37. src/Storages/MergeTree/MergeTreeBackgroundExecutor.cpp:346: DB::MergeTreeBackgroundExecutor<DB::DynamicRuntimeQueue>::threadFunction() @ 0x000000002029cb37
38. src/Storages/MergeTree/MergeTreeBackgroundExecutor.cpp:56: DB::MergeTreeBackgroundExecutor<DB::DynamicRuntimeQueue>::MergeTreeBackgroundExecutor(String, unsigned long, unsigned long, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, std::basic_string_view<char, std::char_traits<char>>)::'lambda'()::operator()() const @ 0x00000000202a5f18
39-44. libcxx std::__invoke / std::function plumbing for the executor thread lambda (invoke.h:394, invoke.h:480, function.h:235, function.h:716, function.h:848, function.h:1187)
Integrity check of the executable skipped because the reference checksum could not be read.
``` | https://github.com/ClickHouse/ClickHouse/issues/56202 | https://github.com/ClickHouse/ClickHouse/pull/59295 | ac906371702c2eac856dac304f59ef041e95d48f | 35ea956255e07fbe05bf05455a227f0b3a00b7a2 | "2023-11-01T14:06:10Z" | c++ | "2024-01-30T03:07:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,184 | ["src/Bridge/IBridge.cpp"] | clickhouse-odbc-bridge crash loop on client disconnect | **Describe what's wrong**
clickhouse-odbc-bridge enters a <Fatal> error loop when a client disconnects after sending a request without reading the response.
**How to reproduce**
While the bridge is running as
`clickhouse-odbc-bridge --http-port 9018`
run:
`for i in {1..10}; do echo -en "GET /ping HTTP/1.1\r\n\r\n" | nc -w 0 localhost 9018; done`
Reproduction is time-sensitive, so it may require multiple attempts to trigger the bug.
**Does it reproduce on recent release?**
yes
**Error message and/or stacktrace**
```
2023.10.31 19:25:30.811290 [ 743459 ] {} <Trace> ODBCRequestHandlerFactory-factory: Request URI: /ping
2023.10.31 19:25:30.812435 [ 743458 ] {} <Trace> BaseDaemon: Received signal -1
2023.10.31 19:25:30.812492 [ 743458 ] {} <Fatal> BaseDaemon: (version 23.10.1.1, build id: CC54E5D90CE79FD49494C9B3704CBB1DF96B7973, git hash: 465962df7f87c241e2da20a3317d1b028d0bbf07) (from thread 743459) Terminate called for uncaught exception:
2023.10.31 19:25:30.812509 [ 743458 ] {} <Fatal> BaseDaemon: Poco::Exception. Code: 1000, e.code() = 107, Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):

0. Poco::Exception::Exception(String const&, int) @ 0x00000000097b72d9 in /home/ubuntu/ClickHouse-master/build/programs/clickhouse-odbc-bridge
1. Poco::IOException::IOException(String const&, int) @ 0x00000000097bd489 in /home/ubuntu/ClickHouse-master/build/programs/clickhouse-odbc-bridge
2. Poco::Net::NetException::NetException(String const&, int) @ 0x00000000096e2fe9 in /home/ubuntu/ClickHouse-master/build/programs/clickhouse-odbc-bridge
3. Poco::Net::SocketImpl::error(int, String const&) @ 0x00000000096ecdd7 in /home/ubuntu/ClickHouse-master/build/programs/clickhouse-odbc-bridge
4. Poco::Net::SocketImpl::peerAddress() @ 0x00000000096ef776 in /home/ubuntu/ClickHouse-master/build/programs/clickhouse-odbc-bridge
5. Poco::Net::HTTPServerSession::clientAddress() @ 0x00000000096d3ef1 in /home/ubuntu/ClickHouse-master/build/programs/clickhouse-odbc-bridge
6. DB::HTTPServerRequest::HTTPServerRequest(std::shared_ptr<DB::IHTTPContext>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x00000000069d3c90 in /home/ubuntu/ClickHouse-master/build/programs/clickhouse-odbc-bridge
7. DB::HTTPServerConnection::run() @ 0x00000000069d30d2 in /home/ubuntu/ClickHouse-master/build/programs/clickhouse-odbc-bridge
8. Poco::Net::TCPServerConnection::start() @ 0x00000000096f28c7 in /home/ubuntu/ClickHouse-master/build/programs/clickhouse-odbc-bridge
9. Poco::Net::TCPServerDispatcher::run() @ 0x00000000096f2da5 in /home/ubuntu/ClickHouse-master/build/programs/clickhouse-odbc-bridge
10. Poco::PooledThread::run() @ 0x000000000980b107 in /home/ubuntu/ClickHouse-master/build/programs/clickhouse-odbc-bridge
11. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000009808ca2 in /home/ubuntu/ClickHouse-master/build/programs/clickhouse-odbc-bridge
12. ? @ 0x00007f2689494ac3 in ?
13. ? @ 0x00007f2689526a40 in ?
(version 23.10.1.1)

[BaseDaemon immediately re-prints the identical stack trace three more times, with and without per-line log prefixes; the copies are omitted here]
...and much more in loop..
```
**Additional context**
Seems exception handler is not set in the odbc-bridge | https://github.com/ClickHouse/ClickHouse/issues/56184 | https://github.com/ClickHouse/ClickHouse/pull/56185 | c01b848ef841698bc7dd78a87f72de9ab45edcce | 2ef10e82e6becbfc6739461124c99a549bf1ef2c | "2023-11-01T05:11:34Z" | c++ | "2023-11-01T16:36:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,180 | ["docker/test/integration/mysql_java_client/MySQLJavaClientTest.java", "src/Core/MySQL/MySQLUtils.cpp", "src/Core/MySQL/MySQLUtils.h", "src/Core/MySQL/PacketsProtocolBinary.cpp", "src/Core/MySQL/PacketsProtocolText.cpp", "tests/integration/test_mysql_protocol/java_client.reference", "tests/integration/test_mysql_protocol/java_client_binary.reference", "tests/integration/test_mysql_protocol/java_client_test.sql", "tests/integration/test_mysql_protocol/java_client_text.reference", "tests/integration/test_mysql_protocol/test.py"] | Tableau converts null in nullable numbers and dates to "0" or "01/01/1970" when using the MySQL Clickhouse connector | When using Tableau, and connecting to Clickhouse through the MySQL connector, we noticed that it doesn't handle `Nullable` fields very well.
When running this SQL in Tableau:
```sql
SELECT NULL AS number_nullable, NULL AS string_nullable, NULL AS date_nullable
UNION ALL
SELECT 123 AS number_nullable, 'abc' AS string_nullable, CURDATE() AS date_nullable;
```
To a regular MySQL instance (all good here!):
<p align="center"><img width="500" src="https://github.com/ClickHouse/ClickHouse/assets/10865165/5eef32a8-dc47-4b51-a499-e65d04c6d45e" /></p>
When running it against the MySQL interface of the ClickHouse instance:
- the `NULL` value in `date_nullable` becomes `01/01/1970`
- the `NULL` value in `number_nullable` becomes `0`
<p align="center"><img width="500" src="https://github.com/ClickHouse/ClickHouse/assets/10865165/102a1025-987a-4aa1-88a4-7f12ffffe7f7" /> </p>
I am not able to reproduce this using other MySQL clients, only Tableau. When connecting with `mysql` in my terminal:
To a regular MySQL instance (all good):
```
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 21
Server version: 8.1.0 Homebrew
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> SELECT NULL AS number_nullable, NULL AS string_nullable, NULL AS date_nullable
-> UNION ALL
-> SELECT 123 AS number_nullable, 'abc' AS string_nullable, CURDATE() AS date_nullable;
+-----------------+-----------------+---------------+
| number_nullable | string_nullable | date_nullable |
+-----------------+-----------------+---------------+
| NULL            | NULL            | NULL          |
|             123 | abc             | 2023-10-31    |
+-----------------+-----------------+---------------+
2 rows in set (0.01 sec)
```
To the ClickHouse MySQL instance (also all good!):
```
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 23
Server version: 23.10.1.1795-ClickHouse
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> SELECT NULL AS number_nullable, NULL AS string_nullable, NULL AS date_nullable
-> UNION ALL
-> SELECT 123 AS number_nullable, 'abc' AS string_nullable, CURDATE() AS date_nullable;
+-----------------+-----------------+---------------+
| number_nullable | string_nullable | date_nullable |
+-----------------+-----------------+---------------+
|             123 | abc             | 2023-10-31    |
| NULL            | NULL            | NULL          |
+-----------------+-----------------+---------------+
2 rows in set (0.10 sec)
Read 2 rows, 2.00 B in 0.001288 sec., 1552 rows/sec., 1.52 KiB/sec.
```
**Describe what's wrong**
Tableau converts null in a nullable number or date field to 0 instead of showing it as `NULL`. Strings work as expected.
Is there anything we can do on the ClickHouse side to make Tableau understand the schema better?
This might very well be a Tableau bug, but hopefully there is a way to fix it from the ClickHouse side.
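For reference, here is a quick way to confirm how ClickHouse itself types this result set server-side (a sketch, run over the native interface rather than through Tableau; all three columns should come back as `Nullable(...)`):
```sql
SELECT
    toTypeName(number_nullable) AS t1,
    toTypeName(string_nullable) AS t2,
    toTypeName(date_nullable)   AS t3
FROM
(
    SELECT NULL AS number_nullable, NULL AS string_nullable, NULL AS date_nullable
    UNION ALL
    SELECT 123, 'abc', today()
);
```
If these report `Nullable` types, the loss of NULLs presumably happens in the MySQL wire-protocol encoding, or in Tableau's decoding of it.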
**How to reproduce**
* Which ClickHouse server version to use: latest master
* Which interface to use: MySQL
**Expected behavior**
The ClickHouse MySQL connector should behave the same way as a regular MySQL instance in Tableau when it comes to parsing `Nullable` values | https://github.com/ClickHouse/ClickHouse/issues/56180 | https://github.com/ClickHouse/ClickHouse/pull/56799 | 3067ca64df756c9c469bfc71a53686c213a82351 | 44bd0598a3c128646974bc114dac2c44797159d7 | "2023-10-31T19:47:25Z" | c++ | "2023-11-16T00:12:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,119 | ["src/Storages/MergeTree/MutateTask.cpp", "tests/queries/0_stateless/02008_materialize_column.sql", "tests/queries/0_stateless/02946_materialize_column_must_not_override_past_values.reference", "tests/queries/0_stateless/02946_materialize_column_must_not_override_past_values.sql"] | Materialize column overwrites all past value. It should be documented. | https://clickhouse.com/docs/en/sql-reference/statements/alter/column#materialize-column
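For context, a minimal sketch of the behavior in question (hypothetical table; reflects the behavior at the time of this report, and note that the mutation runs asynchronously):
```sql
CREATE TABLE t (key UInt64, value String DEFAULT 'new') ENGINE = MergeTree ORDER BY key;
INSERT INTO t (key, value) VALUES (1, 'old');  -- 'old' is stored explicitly
ALTER TABLE t MATERIALIZE COLUMN value;        -- rewrites the column in ALL existing parts
SELECT value FROM t;                           -- once the mutation finishes: returns 'new', 'old' is gone
```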
It does not say an extremely important thing: column materialization will overwrite all past values in the column, as sketched above. | https://github.com/ClickHouse/ClickHouse/issues/56119 | https://github.com/ClickHouse/ClickHouse/pull/58023 | d3fb04250562784c0cd387e658c30479979cc292 | 7e11fc79d9f5156f48cc090f969f983ae507add8 | "2023-10-30T08:54:24Z" | c++ | "2024-02-20T10:12:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,108 | ["tests/ci/worker/dockerhub_proxy_template.sh"] | Integration tests fails due to docker-compose pull timeout | Sometimes, 900 seconds is not enough for docker to pull the images, so:
- maybe there are some problems with http://dockerhub-proxy.dockerhub-proxy-zone:5000/?
- or just with CI infrastructure?
- or maybe it is worth simply increasing this timeout, and enabling `--debug` mode for `dockerd` (maybe it will have more logs, e.g. on retries), but I doubt that this is a good idea, since otherwise the tests will take even longer.
@Felixoid what do you think?
Examples:
- https://s3.amazonaws.com/clickhouse-test-reports/56030/331f661322ee0b12ec41cec8cba36b9973a6aa5a/integration_tests__asan__analyzer__[1_6].html
- https://s3.amazonaws.com/clickhouse-test-reports/56030/331f661322ee0b12ec41cec8cba36b9973a6aa5a/integration_tests__asan__[6_6].html
- https://s3.amazonaws.com/clickhouse-test-reports/56030/331f661322ee0b12ec41cec8cba36b9973a6aa5a/integration_tests__asan__analyzer__[6_6].html
- https://s3.amazonaws.com/clickhouse-test-reports/56030/331f661322ee0b12ec41cec8cba36b9973a6aa5a/integration_tests_flaky_check__asan_.html | https://github.com/ClickHouse/ClickHouse/issues/56108 | https://github.com/ClickHouse/ClickHouse/pull/57744 | ef8068ed03c09253ad31b3dec5033713be2cd291 | 7bd6b42af23a6a11b10d5a6b0c2ffb38ee68a142 | "2023-10-29T16:44:29Z" | c++ | "2023-12-11T13:25:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,107 | ["src/Columns/ColumnDecimal.cpp", "tests/ci/build_check.py", "utils/prepare-time-trace/prepare-time-trace.sh"] | Send information about compiled sizes of translation units and all files to the CI database | Depends on #53610. | https://github.com/ClickHouse/ClickHouse/issues/56107 | https://github.com/ClickHouse/ClickHouse/pull/56636 | 00b414f87da6b9b793832c759ad8757aa339d199 | b1d8c98cfdd8cfdf04196c95fba80fd76f3ece24 | "2023-10-29T14:59:39Z" | c++ | "2023-11-13T05:52:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 56,031 | ["src/Interpreters/InterpreterSelectQuery.cpp", "src/Interpreters/MutationsInterpreter.cpp", "src/Interpreters/TreeRewriter.cpp", "src/Interpreters/TreeRewriter.h", "tests/queries/0_stateless/02902_add_scalar_in_all_case.reference", "tests/queries/0_stateless/02902_add_scalar_in_all_case.sql"] | Scalar doesn't exist for `format` function | ```sql
SELECT *
FROM format(TSVRaw, (
SELECT
'123',
'456'
))
FORMAT TSVRaw
Query id: c5e0b9f8-d055-4b5d-8e14-b695f8096ab0
0 rows in set. Elapsed: 0.002 sec.
Received exception:
Code: 36. DB::Exception: Scalar `2451974040954057853_15236037560691200747` doesn't exist (internal bug): While processing __getScalar('2451974040954057853_15236037560691200747') AS constant_expression. (BAD_ARGUMENTS)
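
-- For comparison, a hedged sketch of a variant that works: passing the data as a
-- plain string literal (a hypothetical stand-in for the same two values) avoids the error:
--   SELECT * FROM format(TSV, '123\t456')
-- i.e. the exception above appears when the data argument is produced by a scalar subquery.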
``` | https://github.com/ClickHouse/ClickHouse/issues/56031 | https://github.com/ClickHouse/ClickHouse/pull/56057 | b042e2d98621303491f7ffc238deed5855f9d2a8 | cb0cf67d6766ee3ad7b876c6a068ea3a44acb6cc | "2023-10-26T02:06:35Z" | c++ | "2023-10-30T11:31:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,998 | ["programs/server/Server.cpp"] | OSIOWaitMicroseconds is not available at systems with the new kernels | ```
SELECT
(arrayJoin(ProfileEvents) AS e).1 AS name,
sum(e.2) AS value
FROM system.query_log
WHERE (event_date >= (today() - 100)) AND (name LIKE 'OS%')
GROUP BY name
ORDER BY name ASC
┌─name─────────────────────────┬────────value─┐
│ OSCPUVirtualTimeMicroseconds │   4504276982 │
│ OSCPUWaitMicroseconds        │    179718688 │
│ OSReadBytes                  │    104423424 │
│ OSReadChars                  │   6549690368 │
│ OSWriteBytes                 │ 597164412928 │
│ OSWriteChars                 │ 602138515456 │
└──────────────────────────────┴──────────────┘
```
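Note that `OSIOWaitMicroseconds` is absent from the list above. A direct check (sketch) against the running server's counters:
```sql
-- expected to return a row on systems where taskstats-based I/O accounting works;
-- on the affected system it returns nothing
SELECT event, value
FROM system.events
WHERE event = 'OSIOWaitMicroseconds';
```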
```
uname -a
Linux clickhouse 5.15.0-1021-oracle #27~20.04.1-Ubuntu SMP Mon Oct 17 01:56:24 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
ClickHouse server version 23.9.1
```
```
grep CAP_NET_ADMIN /lib/systemd/system/clickhouse-server.service
CapabilityBoundingSet=CAP_NET_ADMIN CAP_IPC_LOCK CAP_SYS_NICE CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_ADMIN CAP_IPC_LOCK CAP_SYS_NICE CAP_NET_BIND_SERVICE
```
```
getcap /usr/bin/clickhouse
/usr/bin/clickhouse = cap_net_bind_service,cap_net_admin,cap_ipc_lock,cap_sys_nice+ep
``` | https://github.com/ClickHouse/ClickHouse/issues/55998 | https://github.com/ClickHouse/ClickHouse/pull/56227 | 39163b4e314ae51aefd4fc25850ca930420e7873 | cea78cd093ac1ebc6feb7533483c6cd867078ae9 | "2023-10-24T22:44:32Z" | c++ | "2023-11-02T12:21:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,942 | ["src/Functions/FunctionsComparison.h", "src/Interpreters/InterpreterShowIndexesQuery.cpp", "tests/queries/0_stateless/02906_interval_comparison.reference", "tests/queries/0_stateless/02906_interval_comparison.sql"] | interval comparison apparently ignore units | Comparison of intervals appears to ignore units and just compares the numeric values.
Here's what happens (clearly 1 microsecond is not equal to 1 second, and similarly 100us is not greater than 1 second).
```
df3efe7565e7 :) select toIntervalMicrosecond(1) = toIntervalSecond(1)
SELECT toIntervalMicrosecond(1) = toIntervalSecond(1)
Query id: 49f6631d-92c2-42c8-9b2f-99c40b3fae8f
┌─equals(toIntervalMicrosecond(1), toIntervalSecond(1))─┐
│                                                      1 │
└────────────────────────────────────────────────────────┘
1 row in set. Elapsed: 0.004 sec.
df3efe7565e7 :) select toIntervalMicrosecond(100) > toIntervalSecond(1)
SELECT toIntervalMicrosecond(100) > toIntervalSecond(1)
Query id: a4ccf74f-e04c-4ef2-b6c8-6c22002b48aa
┌─greater(toIntervalMicrosecond(100), toIntervalSecond(1))─┐
│                                                         1 │
└───────────────────────────────────────────────────────────┘
1 row in set. Elapsed: 0.001 sec.
df3efe7565e7 :) select version()
SELECT version()
Query id: 17b10304-ea34-4f8e-9223-788c7d4a4df6
┌─version()───┐
│ 23.8.1.2992 │
└─────────────┘
1 row in set. Elapsed: 0.003 sec.
```
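A hedged workaround sketch (my addition, not from the reporter): anchoring both intervals to the same timestamp moves the comparison into the `DateTime64` domain, where units are respected:
```sql
-- Expected: 0, because 100 us < 1 s.
WITH now64(6) AS t
SELECT (t + toIntervalMicrosecond(100)) > (t + toIntervalSecond(1)) AS cmp
```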
https://fiddle.clickhouse.com/19443d43-8d8d-4d27-adf9-33a02ec1d6cf | https://github.com/ClickHouse/ClickHouse/issues/55942 | https://github.com/ClickHouse/ClickHouse/pull/56090 | 718a7faa7c36093b0060ecc8d050cdd6ee220a8c | 744b1db0846865fd3eed3cb187018bbf5f300082 | "2023-10-23T12:49:48Z" | c++ | "2023-10-28T12:56:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,875 | ["src/Common/MemoryTracker.cpp", "src/Common/MemoryTracker.h", "tests/queries/0_stateless/02896_memory_accounting_for_user.reference", "tests/queries/0_stateless/02896_memory_accounting_for_user.sh"] | user memory tracking - wrong numbers | The bash script below does async inserts in a loop:
```bash
#!/bin/bash
clickhouse-client --query='drop table if exists test_inserts'
clickhouse-client --query='create table test_inserts engine=Null AS system.numbers'
total_iterations=40000
iterations_in_parallel=80
# Throttle: keep at most $iterations_in_parallel insert jobs in flight.
for ((i = 1; i <= $total_iterations; i++)); do
    while true; do
        running_jobs=$(jobs -p | wc -w)
        if [ $running_jobs -lt $iterations_in_parallel ]; then
            break
        fi
        sleep 0.01
    done
    # Each iteration: one async INSERT over HTTP, then print both trackers.
    (
        (
            clickhouse-local --query='SELECT * FROM numbers(1000000)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20test_inserts%20FORMAT%20TSV&async_insert=1' --data-binary @-
            # echo "INSERT $i completed"
            clickhouse-client --query="WITH (SELECT formatReadableSize(memory_usage) FROM system.user_processes WHERE user='default') as user_mem, (SELECT formatReadableSize(value) FROM system.metrics WHERE metric='MemoryTracking') AS system_mem SELECT 'User: ' || user_mem || ', system: ' || system_mem"
        ) &
    )
done
done
wait
```
It also prints the user memory tracker and system memory tracker.
Run it and keep it running for several minutes. After a while the user memory tracker will start showing numbers much bigger than the system memory tracker:
```
User: 6.45 GiB, system: 1.12 GiB
```
And if you use `max_memory_usage_for_user` it will start returning memory exceptions despite the fact that there is plenty of RAM.
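A hedged observation sketch (my addition; it reuses only the `system.user_processes` and `system.metrics` columns the script already queries) that puts both trackers side by side, so the growing divergence is easy to watch in raw bytes:
```sql
SELECT
    (SELECT sum(memory_usage) FROM system.user_processes) AS user_tracked,
    (SELECT value FROM system.metrics WHERE metric = 'MemoryTracking') AS global_tracked,
    user_tracked - global_tracked AS divergence -- keeps growing on an affected server
```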
Once you stop doing async inserts the user memory tracker gets back to normal. | https://github.com/ClickHouse/ClickHouse/issues/55875 | https://github.com/ClickHouse/ClickHouse/pull/56089 | 76e6b75ae22c186ebe08f039b546c766a5df6902 | 3069db71f1cc4548ab1f1baab8fa437717334199 | "2023-10-20T15:44:30Z" | c++ | "2023-11-02T16:14:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,843 | ["src/Processors/Transforms/WindowTransform.cpp", "tests/queries/0_stateless/02900_window_function_with_sparse_column.reference", "tests/queries/0_stateless/02900_window_function_with_sparse_column.sql"] | Window function returns wrong result on sparse column | `last_value` produces wrong data in some cases.
Reproduced on 23.7 and 23.9, not on 23.6 and 23.5 ([the list of releases](https://github.com/ClickHouse/ClickHouse/blob/master/utils/list-versions/version_date.tsv)).
**How to reproduce**
```sql
create database test on cluster default engine = Atomic;

CREATE TABLE test.test on cluster default
(
    id String,
    time DateTime64(9),
    key Int64,
    value Bool
)
ENGINE = ReplicatedMergeTree
PARTITION BY toYYYYMM(time)
ORDER BY (key, id, time);

insert into test.test values ('id0', now(), 3, false);
```
The following query returns `true`, which is wrong:
```sql
SELECT
    last_value(value) OVER (PARTITION BY id ORDER BY time ASC) as last_value
FROM test.test
WHERE (key = 3)
```
The right result is returned if `PARTITION BY id` or the `WHERE` clause is removed:
```sql
SELECT
    last_value(value) OVER (ORDER BY time ASC) as last_value
FROM test.test
WHERE (key = 3);

SELECT
    last_value(value) OVER (PARTITION BY id ORDER BY time ASC) as last_value
FROM test.test;
```
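A hedged workaround sketch (my addition; that sparse serialization is the trigger is an assumption taken from the issue title; `ratio_of_defaults_for_sparse_serialization` is the MergeTree setting that controls it):
```sql
-- Setting the ratio to 1.0 disables sparse serialization for new parts.
ALTER TABLE test.test MODIFY SETTING ratio_of_defaults_for_sparse_serialization = 1.0;
-- Rewrite existing parts so old sparse parts stop triggering the bug;
-- fine here only because the repro table is tiny.
OPTIMIZE TABLE test.test FINAL;
```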
| https://github.com/ClickHouse/ClickHouse/issues/55843 | https://github.com/ClickHouse/ClickHouse/pull/55895 | ab3f9bcacfd9444c765c2986d1c34b64dfe0e029 | 1be4ff229be998787da5bc734056e5fcde06118c | "2023-10-19T17:06:41Z" | c++ | "2023-10-21T22:02:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,834 | ["src/Client/LocalConnection.cpp", "tests/queries/0_stateless/02900_clickhouse_local_drop_current_database.reference", "tests/queries/0_stateless/02900_clickhouse_local_drop_current_database.sh"] | Deleting database that I'm currently in | Using ClickHouse Local, if I create a database:
```
$ clickhouse local -m --path archives.chdb
Processing configuration file 'config.xml'.
ClickHouse local version 23.9.1.1854 (official build).
:) create database foo;
CREATE DATABASE foo
Query id: b6ccc456-0334-4ce1-a644-9d6cfeac09b8
Ok.
0 rows in set. Elapsed: 0.002 sec.
:) use foo;
USE foo
Query id: 3ab73c2b-4c81-4c56-9717-1f3c19ffbd3b
Ok.
0 rows in set. Elapsed: 0.000 sec.
```
And then drop it while it's my current database:
```
:) drop database foo;
DROP DATABASE foo
Query id: b5ec1e84-1120-44ef-989f-fe9228a8218b
Ok.
0 rows in set. Elapsed: 0.001 sec.
```
I'm now a bit stuck!
```
:) select 1;
SELECT 1
Query id: 5fd07bab-ef65-4cc1-93b9-a797d7f7154e
Exception on client:
Code: 81. DB::Exception: Database foo does not exist. (UNKNOWN_DATABASE)
:) use _local;
USE _local
Query id: 77296f9e-6e8e-40c7-a06d-479d3c88ea8d
Exception on client:
Code: 81. DB::Exception: Database foo does not exist. (UNKNOWN_DATABASE)
```
I think maybe it shouldn't let me delete a database if I'm using that one, otherwise there's no escape?
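For what it's worth, a hedged escape sketch (my addition): the wedged session cannot be recovered in place as far as I can tell, but the data stored under `--path` survives, so quitting and reconnecting works:
```
:) exit
$ clickhouse local -m --path archives.chdb
:) SELECT currentDatabase();
```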
| https://github.com/ClickHouse/ClickHouse/issues/55834 | https://github.com/ClickHouse/ClickHouse/pull/55853 | 5819bcd07a1ed424bc33f81c5f2b9145ca059514 | 14c29001856f1d1a498faf5e08e5e1ca08c7be79 | "2023-10-19T13:12:05Z" | c++ | "2023-10-20T17:04:54Z" |