status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 26,019 | ["dev/breeze/src/airflow_breeze/commands/release_management_commands.py", "dev/breeze/src/airflow_breeze/utils/docker_command_utils.py", "images/breeze/output_release-management_generate-constraints.svg", "scripts/in_container/_in_container_script_init.sh", "scripts/in_container/_in_container_utils.sh", "scripts/in_container/in_container_utils.py", "scripts/in_container/install_airflow_and_providers.py", "scripts/in_container/run_generate_constraints.py", "scripts/in_container/run_generate_constraints.sh", "scripts/in_container/run_system_tests.sh"] | Rewrite the in-container scripts in Python | We have a number of "in_container" scripts written in Bash, They are doing a number of houseekeeping stuff but since we already have Python 3.7+ inside the CI image, we could modularise them more and make them run from external and simplify entrypoint_ci (for example separate script for tests). | https://github.com/apache/airflow/issues/26019 | https://github.com/apache/airflow/pull/36158 | 36010f6d0e3231081dbae095baff5a5b5c5b34eb | f39cdcceff4fa64debcaaef6e30f345b7b21696e | "2022-08-28T09:23:08Z" | python | "2023-12-11T07:02:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 26,013 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | schedule_interval is not respecting the value assigned to that either it's one day or none | ### Apache Airflow version
main (development)
### What happened
The stored `schedule_interval` is `None` even if `timedelta(days=365, hours=6)` is assigned, and it is 1 day for both `schedule_interval=None` and `schedule_interval=timedelta(days=3)`.
### What you think should happen instead
It should respect the value assigned to it.
### How to reproduce
Create a dag with `schedule_interval=None` or `schedule_interval=timedelta(days=5)` and observe the behaviour.

**DAG-**
```
with DAG(
    dag_id="branch_python_operator",
    start_date=days_ago(1),
    schedule_interval="* * * * *",
    doc_md=docs,
    tags=['core']
) as dag:
```
**DB Results-**
```
postgres=# select schedule_interval from dag where dag_id='branch_python_operator';
schedule_interval
------------------------------------------------------------------------------
{"type": "timedelta", "attrs": {"days": 1, "seconds": 0, "microseconds": 0}}
(1 row)
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/26013 | https://github.com/apache/airflow/pull/26082 | d4db9aecc3e534630c76e59c54d90329ed20a6ab | c982080ca1c824dd26c452bcb420df0f3da1afa8 | "2022-08-27T16:35:56Z" | python | "2022-08-31T09:09:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,976 | ["airflow/api_connexion/schemas/pool_schema.py", "airflow/models/pool.py", "airflow/www/views.py", "tests/api_connexion/endpoints/test_pool_endpoint.py", "tests/api_connexion/schemas/test_pool_schemas.py", "tests/api_connexion/test_auth.py", "tests/www/views/test_views_pool.py"] | Include "Scheduled slots" column in Pools view | ### Description
It would be nice to have a "Scheduled slots" column to see how many slots want to enter each pool. Currently we are only displaying the running and queued slots.
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25976 | https://github.com/apache/airflow/pull/26006 | 1c73304bdf26b19d573902bcdfefc8ca5160511c | bcdc25dd3fbda568b5ff2c04701623d6bf11a61f | "2022-08-26T07:53:27Z" | python | "2022-08-29T06:31:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,952 | ["airflow/providers/amazon/aws/operators/rds.py", "airflow/providers/amazon/aws/sensors/rds.py", "docs/apache-airflow-providers-amazon/operators/rds.rst", "tests/providers/amazon/aws/operators/test_rds.py", "tests/system/providers/amazon/aws/example_rds_instance.py"] | Add RDS operators/sensors | ### Description
I think adding the following operators/sensors would benefit companies that need to start/stop RDS instances programmatically.
Name | Description | PR
:- | :- | :-
`RdsStartDbOperator` | Start an instance, and optionally wait for it enter "available" state | #27076
`RdsStopDbOperator` | Start an instance, and optionally wait for it to enter "stopped" state | #27076
`RdsDbSensor` | Wait for the requested status (eg. available, stopped) | #26003
Is this something that would be accepted into the codebase?
Please let me know.
### Use case/motivation
#### 1. Saving money
RDS is expensive. To save money, a company keeps its test/dev relational databases shut down until it needs to use them. With Airflow, they can start a database instance before running a workload, then turn it off after the workload finishes (or errors).
#### 2. Force RDS to stay shutdown
RDS automatically starts a database after 1 week of downtime. A company does not need this feature. They can create a DAG to continuously run the shutdown command on a list of database instance ids stored in a `Variable`. The alternative is to create a shell script or log in to the console and manually shut down each database every week.
#### 3. Making sure a database is running before scheduling workload
A company programmatically starts/stops its RDS instances. Before they run a workload, they want to make sure the database is running. They can use a sensor to make sure a database is available before attempting to run any jobs that require access.
Also, during maintenance windows, RDS instances may be taken offline. Rather than tuning each DAG schedule to run outside of this window, a company can use a sensor to wait until the instance is available. (Yes, the availability check could also take place immediately before the maintenance window.)
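To make the mechanics concrete, here is a minimal boto3 sketch of what such operators/sensors would do under the hood (start/stop an instance and wait for a target status). The region, identifiers, and the operator/sensor parallels in the comments are assumptions, not the proposed provider code.
```python
# Hedged sketch of the underlying boto3 calls; region and identifiers are made up.
import time

import boto3

rds = boto3.client("rds", region_name="us-east-1")


def start_and_wait(db_identifier: str) -> None:
    # Roughly what an RdsStartDbOperator with "wait for available" semantics would do.
    rds.start_db_instance(DBInstanceIdentifier=db_identifier)
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=db_identifier)


def stop_and_wait(db_identifier: str, poke_interval: int = 30) -> None:
    # Roughly what an RdsStopDbOperator / a sensor waiting for "stopped" would do.
    rds.stop_db_instance(DBInstanceIdentifier=db_identifier)
    while True:
        instance = rds.describe_db_instances(DBInstanceIdentifier=db_identifier)["DBInstances"][0]
        if instance["DBInstanceStatus"] == "stopped":
            return
        time.sleep(poke_interval)
```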
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25952 | https://github.com/apache/airflow/pull/27076 | d4bfccb3c90d889863bb1d1500ad3158fc833aae | a2413cf6ca8b93e491a48af11d769cd13bce8884 | "2022-08-25T08:51:53Z" | python | "2022-10-19T05:36:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,949 | ["airflow/www/static/js/api/useGridData.test.js", "airflow/www/static/js/api/useGridData.ts"] | Auto-refresh is broken in 2.3.4 | ### Apache Airflow version
2.3.4
### What happened
In PR #25042 a bug was introduced that prevents auto-refresh from working when tasks of type `scheduled` are running.
### What you think should happen instead
Auto-refresh should work for any running or queued task, rather than only manually-scheduled tasks.
### How to reproduce
_No response_
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25949 | https://github.com/apache/airflow/pull/25950 | e996a88c7b19a1d30c529f5dd126d0a8871f5ce0 | 37ec752c818d4c42cba6e7fdb2e11cddc198e810 | "2022-08-25T03:39:42Z" | python | "2022-08-25T11:46:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,937 | ["airflow/providers/common/sql/hooks/sql.py", "airflow/providers/common/sql/provider.yaml", "airflow/providers/presto/hooks/presto.py", "airflow/providers/presto/provider.yaml", "airflow/providers/sqlite/hooks/sqlite.py", "airflow/providers/sqlite/provider.yaml", "airflow/providers/trino/hooks/trino.py", "airflow/providers/trino/provider.yaml", "generated/provider_dependencies.json"] | TrinoHook uses wrong parameter representation when inserting rows | ### Apache Airflow Provider(s)
trino
### Versions of Apache Airflow Providers
apache-airflow-providers-trino==4.0.0
### Apache Airflow version
2.3.3
### Operating System
macOS 12.5.1 (21G83)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
`TrinoHook.insert_rows()` throws a syntax error due to the underlying prepared statement using "%s" as representation for parameters, instead of "?" [which Trino uses](https://trino.io/docs/current/sql/prepare.html#description).
### What you think should happen instead
`TrinoHook.insert_rows()` should insert rows using Trino-compatible SQL statements.
The following exception is raised currently:
`trino.exceptions.TrinoUserError: TrinoUserError(type=USER_ERROR, name=SYNTAX_ERROR, message="line 1:88: mismatched input '%'. Expecting: ')', <expression>, <query>", query_id=xxx)`
### How to reproduce
Instantiate an `airflow.providers.trino.hooks.trino.TrinoHook` instance and use its `insert_rows()` method.
Operators using this method internally are also broken: e.g. `airflow.providers.trino.transfers.gcs_to_trino.GCSToTrinoOperator`
### Anything else
The issue seems to come from `TrinoHook.insert_rows()` relying on `DbApiHook.insert_rows()`, which uses "%s" to represent query parameters.
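For illustration, a small standalone snippet showing the two placeholder styles side by side (the helper below is hypothetical, not provider code):
```python
# Hypothetical helper, only to contrast the placeholder styles.
def build_insert(table: str, columns: list[str], placeholder: str) -> str:
    cols = ", ".join(columns)
    values = ", ".join(placeholder for _ in columns)
    return f"INSERT INTO {table} ({cols}) VALUES ({values})"

print(build_insert("my_table", ["id", "name"], "%s"))  # what DbApiHook.insert_rows() builds today
print(build_insert("my_table", ["id", "name"], "?"))   # what Trino's prepared statements expect
```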
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25937 | https://github.com/apache/airflow/pull/25939 | 4c3fb1ff2b789320cc2f19bd921ac0335fc8fdf1 | a74d9349919b340638f0db01bc3abb86f71c6093 | "2022-08-24T14:02:00Z" | python | "2022-08-27T01:15:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,926 | ["docs/apache-airflow-providers-docker/decorators/docker.rst", "docs/apache-airflow-providers-docker/index.rst"] | How to guide for @task.docker decorator | ### Body
Hi.
[The documentation for apache-airflow-providers-docker](https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/index.html) does not provide information on how to use the `@task.docker` decorator. We have this decorator described only in the API reference for this provider and in the documentation for the apache-airflow package.
Best regards,
Kamil
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/25926 | https://github.com/apache/airflow/pull/28251 | fd5846d256b6d269b160deb8df67cd3d914188e0 | 74b69030efbb87e44c411b3563989d722fa20336 | "2022-08-24T04:39:14Z" | python | "2022-12-14T08:48:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,836 | ["airflow/api/client/api_client.py", "airflow/api/client/json_client.py", "airflow/api/client/local_client.py", "airflow/cli/cli_parser.py", "airflow/cli/commands/dag_command.py", "tests/cli/commands/test_dag_command.py"] | Support overriding `replace_microseconds` parameter for `airflow dags trigger` CLI command | ### Description
The `airflow dags trigger` CLI command always runs with `replace_microseconds=True` because of the default value in the API. It would be very nice to be able to control this flag from the CLI.
### Use case/motivation
We use AWS MWAA. The exposed interface is Airflow CLI (yes, we could also ask to get a different interface from AWS MWAA, but I think this is something that was just overlooked for the CLI?), which does not support overriding `replace_microseconds` parameter when calling `airflow dags trigger` CLI command.
For the most part, our dag runs for a given dag do not happen at remotely the same time. However, based on user behavior, they are sometimes triggered within the same second (albeit not the same microsecond). The first dag run is successfully triggered, but the second dag run fails because the `replace_microseconds` parameter wipes out the microseconds that we pass. Thus, DagRun.find_duplicates returns True for the second dag run that we're trying to trigger, and this raises the `DagRunAlreadyExists` exception.
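To make the failure mode concrete, here is a small standalone illustration (not Airflow code) of what replacing microseconds does to two triggers that arrive within the same second:
```python
# Two execution dates in the same second collide once microseconds are zeroed out.
import pendulum

first = pendulum.datetime(2022, 8, 19, 17, 4, 24, 123456, tz="UTC")
second = pendulum.datetime(2022, 8, 19, 17, 4, 24, 654321, tz="UTC")

print(first == second)                                                 # False
print(first.replace(microsecond=0) == second.replace(microsecond=0))   # True -> DagRunAlreadyExists
```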
### Related issues
Not quite - they all seem to be around the experimental api and not directly related to the CLI.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25836 | https://github.com/apache/airflow/pull/27640 | c30c0b5714e4ee217735649b9405f0f79af63059 | b6013c0b8f1064c523af2d905c3f32ff1cbec421 | "2022-08-19T17:04:24Z" | python | "2022-11-26T00:07:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,781 | ["airflow/providers/google/cloud/operators/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | BigQueryGetDataOperator does not support passing project_id | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.3.0
### Apache Airflow version
2.3.2
### Operating System
MacOS
### Deployment
Other
### Deployment details
_No response_
### What happened
You cannot pass project_id as an argument when using `BigQueryGetDataOperator`. This operator internally falls back to the `default` project id.
### What you think should happen instead
The operator should let developers pass `project_id` when needed.
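A hedged sketch of the requested usage; the `project_id` argument is the proposal and is not accepted by the 8.3.0 operator, while the other arguments are existing ones:
```python
# Sketch of the desired call; project_id is the proposed (not yet existing) argument.
from airflow.providers.google.cloud.operators.bigquery import BigQueryGetDataOperator

get_data = BigQueryGetDataOperator(
    task_id="get_data",
    project_id="my-non-default-project",  # proposed argument
    dataset_id="my_dataset",
    table_id="my_table",
    max_results=10,
)
```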
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25781 | https://github.com/apache/airflow/pull/25782 | 98a7701942c683f3126f9c4f450c352b510a2734 | fc6dfa338a76d02a426e2b7f0325d37ea5e95ac3 | "2022-08-18T04:40:01Z" | python | "2022-08-20T21:14:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,775 | ["airflow/models/abstractoperator.py", "airflow/models/taskmixin.py", "tests/models/test_baseoperator.py"] | XComs from another task group fail to populate dynamic task mapping metadata | ### Apache Airflow version
2.3.3
### What happened
When a task returns a mappable XCom within a task group, the dynamic task mapping feature (via `.expand`) causes the Airflow Scheduler to loop infinitely with a runtime error:
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 751, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 839, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 921, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1163, in _schedule_dag_run
schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 524, in update_state
info = self.task_instance_scheduling_decisions(session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 654, in task_instance_scheduling_decisions
schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 710, in _get_ready_tis
expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 614, in expand_mapped_task
operator.mul, self._resolve_map_lengths(run_id, session=session).values()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 600, in _resolve_map_lengths
raise RuntimeError(f"Failed to populate all mapping metadata; missing: {keys}")
RuntimeError: Failed to populate all mapping metadata; missing: 'x'
```
### What you think should happen instead
Xcoms from different task groups should be mappable within other group scopes.
### How to reproduce
```
from airflow import DAG
from airflow.decorators import task
from airflow.utils.task_group import TaskGroup
import pendulum
@task
def enumerate(x):
    return [i for i in range(x)]

@task
def addOne(x):
    return x + 1

with DAG(
    dag_id="TaskGroupMappingBug",
    schedule_interval=None,
    start_date=pendulum.now().subtract(days=1),
) as dag:
    with TaskGroup(group_id="enumerateNine"):
        y = enumerate(9)

    with TaskGroup(group_id="add"):
        # airflow scheduler throws error here so this is never reached
        z = addOne.expand(x=y)
```
### Operating System
linux/amd64 via Docker (apache/airflow:2.3.3-python3.9)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
docker-compose version 1.29.2, build 5becea4c
Docker Engine v20.10.14
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25775 | https://github.com/apache/airflow/pull/25793 | 6e66dd7776707936345927f8fccee3ddb7f23a2b | 5c48ed19bd3b554f9c3e881a4d9eb61eeba4295b | "2022-08-17T18:42:22Z" | python | "2022-08-19T09:55:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,743 | ["airflow/config_templates/airflow_local_settings.py"] | DeprecationWarning: Passing filename_template to FileTaskHandler is deprecated and has no effect | ### Apache Airflow version
2.3.3
### What happened
After upgrading to or installing airflow 2.3.3, remote_logging in airflow.cfg can't be set to true without creating a deprecation warning.
I'm using remote logging to an s3 bucket.
It doesn't matter which version of **apache-airflow-providers-amazon** I have installed.
When using systemd units to start the airflow components, the webserver will spam the deprecation warning every second.
Tested with Python 3.10 and 3.7.3
### What you think should happen instead
When using remote logging, Airflow should not execute a deprecated action every second in the background.
### How to reproduce
You could quickly install a Python virtual environment on a machine of your choice.
After that, install airflow and apache-airflow-providers-amazon with pip.
Then change the logging part in the airflow.cfg:
**[logging]
remote_logging = True**
create a testdag.py containing at least: **from airflow import DAG**
run it with Python to see the errors:
python testdag.py
hint: some more DeprecationWarnings will appear because the standard airflow.cfg which gets created when installing airflow is not up to date.
The deprecation warning you should see when turning remote_logging to true is:
`.../lib/python3.10/site-packages/airflow/utils/log/file_task_handler.py:52 DeprecationWarning: Passing filename_template to FileTaskHandler is deprecated and has no effect`
### Operating System
Debian GNU/Linux 10 (buster) and also tested Fedora release 36 (Thirty Six)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon 4.0.0
### Deployment
Virtualenv installation
### Deployment details
Running a small setup. 2 Virtual Machines. Airflow installed over pip inside a Python virtual environment.
### Anything else
The problem occurs on every dag run, and it gets logged every second inside the journal produced by the webserver systemd unit.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25743 | https://github.com/apache/airflow/pull/25764 | 0267a47e5abd104891e0ec6c741b5bed208eef1e | da616a1421c71c8ec228fefe78a0a90263991423 | "2022-08-16T14:26:29Z" | python | "2022-08-19T14:13:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,718 | ["airflow/providers/google/cloud/hooks/bigquery_dts.py"] | Incorrect config name generated for BigQueryDeleteDataTransferConfigOperator | ### Apache Airflow version
2.3.3
### What happened
When we try to delete a big query transfer config using BigQueryDeleteDataTransferConfigOperator, we are unable to find the config, as the generated transfer config name is erroneous.
As a result, although a transfer config id (that exists) is passed to the operator, we get an error saying that the transfer config doesn't exist.
### What you think should happen instead
On further analysis, it was revealed that, in the bigquery_dts hook, the project name is incorrectly created as follows on line 171:
`project = f"/{project}/locations/{self.location}"`
That is, there's an extra `/` prefixed to the project.
Removing the extra `/` fixes this bug.
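A quick standalone illustration of the effect, with made-up values:
```python
# Made-up values, only to show the stray "/" ending up in the generated name.
project = "projects/my-project"
location = "europe"

broken = f"/{project}/locations/{location}"  # what the hook builds today
fixed = f"{project}/locations/{location}"    # what it should build

print(broken)  # /projects/my-project/locations/europe  <- lookup of the transfer config fails
print(fixed)   # projects/my-project/locations/europe
```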
### How to reproduce
1. Create a transfer config in the BQ data transfers/or use the operator BigQueryCreateDataTransferOperator (in a project located in Europe).
2. Try to delete the transfer config using the BigQueryDeleteDataTransferConfigOperator by passing the location of the project along with the transfer config id. This step will throw the error.
### Operating System
Windows 11
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25718 | https://github.com/apache/airflow/pull/25719 | c6e9cdb4d013fec330deb79810dbb735d2c01482 | fa0cb363b860b553af2ef9530ea2de706bd16e5d | "2022-08-15T03:02:59Z" | python | "2022-10-02T00:56:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,712 | ["airflow/providers/postgres/provider.yaml", "generated/provider_dependencies.json"] | postgres provider: use non-binary psycopg2 (recommended for production use) | ### Apache Airflow Provider(s)
postgres
### Versions of Apache Airflow Providers
apache-airflow-providers-postgres==5.0.0
### Apache Airflow version
2.3.3
### Operating System
Debian 11 (airflow docker image)
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
psycopg2-binary package is installed.
### What you think should happen instead
The psycopg2 (non-binary, source) package should be installed.
According to the [psycopg2 docs](https://www.psycopg.org/docs/install.html#psycopg-vs-psycopg-binary), (emphasis theirs) "**For production use you are advised to use the source distribution.**".
### How to reproduce
Either
```
docker run -it apache/airflow:2.3.3-python3.10
pip freeze |grep -E '(postgres|psycopg2)'
```
Or
```
docker run -it apache/airflow:slim-2.3.3-python3.10
curl -O https://raw.githubusercontent.com/apache/airflow/constraints-2.3.3/constraints-3.10.txt
pip install -c constraints-3.10.txt apache-airflow-providers-postgres
pip freeze |grep -E '(postgres|psycopg2)'
```
Either way, the output is:
```
apache-airflow-providers-postgres==5.0.0
psycopg2-binary==2.9.3
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25712 | https://github.com/apache/airflow/pull/25710 | 28165eef2ac26c66525849e7bebb55553ea5a451 | 14d56a5a9e78580c53cf85db504464daccffe21c | "2022-08-14T10:23:53Z" | python | "2022-08-23T15:08:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,671 | ["airflow/cli/cli_parser.py", "airflow/cli/commands/dag_command.py", "tests/cli/commands/test_dag_command.py"] | `airflow dags test` command with run confs | ### Description
Currently, the [`airflow dags test`](https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#test) command doesn't accept any configs to set run confs. We can do that with the [`airflow dags trigger`](https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#trigger) command through the `--conf` argument.
The command `airflow dags test` is really useful when testing DAGs on local machines or in CI/CD environments. Can we have that feature for the `airflow dags test` command?
### Use case/motivation
We could pass run confs the same way the `airflow dags trigger` command does.
Example:
```
$ airflow dags test <DAG_ID> <EXECUTION_DATE> --conf '{"path": "some_path"}'
```
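For context, a hedged sketch (DAG and task names are made up) of how a DAG would consume the conf passed above; run confs reach tasks through `dag_run.conf`:
```python
# Illustrative consumer of the conf passed on the command line above.
import pendulum
from airflow.decorators import dag, task
from airflow.operators.python import get_current_context


@dag(start_date=pendulum.datetime(2022, 8, 1, tz="UTC"), schedule_interval=None)
def conf_consumer():
    @task
    def read_conf():
        conf = get_current_context()["dag_run"].conf or {}
        print(conf.get("path", "some_default_path"))

    read_conf()


conf_consumer_dag = conf_consumer()
```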
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25671 | https://github.com/apache/airflow/pull/25900 | bcdc25dd3fbda568b5ff2c04701623d6bf11a61f | bcc2fe26f6e0b7204bdf73f57d25b4e6c7a69548 | "2022-08-11T13:00:03Z" | python | "2022-08-29T08:51:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,669 | ["airflow/providers/atlassian/jira/CHANGELOG.rst", "airflow/providers/atlassian/jira/hooks/jira.py", "airflow/providers/atlassian/jira/operators/jira.py", "airflow/providers/atlassian/jira/provider.yaml", "airflow/providers/atlassian/jira/sensors/jira.py", "generated/provider_dependencies.json", "tests/providers/atlassian/jira/hooks/test_jira.py", "tests/providers/atlassian/jira/operators/test_jira.py", "tests/providers/atlassian/jira/sensors/test_jira.py"] | change Jira sdk to official atlassian sdk | ### Description
Jira is a product of atlassian https://www.atlassian.com/
There are https://github.com/pycontribs/jira/issues and https://github.com/atlassian-api/atlassian-python-api
### Use case/motivation
The motivation is that Airflow currently uses an unofficial SDK which is limited to Jira only, so we also can't add operators for the other products.
https://github.com/atlassian-api/atlassian-python-api is the official one and also contains more integrations to other atlassian products
https://github.com/atlassian-api/atlassian-python-api/issues/1027
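For illustration, a short hedged example of the official client (the connection values are placeholders, and the exact method surface should be checked against the library docs):
```python
# Hedged example of the official atlassian-python-api client; values are placeholders.
from atlassian import Jira

jira = Jira(
    url="https://example.atlassian.net",
    username="user@example.com",
    password="api-token",
)
issue = jira.issue("PROJ-1")  # fetch a single issue as a dict
print(issue["fields"]["summary"])
```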
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25669 | https://github.com/apache/airflow/pull/27633 | b5338b5825859355b017bed3586d5a42208f1391 | f3c68d7e153b8d417edf4cc4a68d18dbc0f30e64 | "2022-08-11T12:08:46Z" | python | "2022-12-07T12:48:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,653 | ["airflow/jobs/backfill_job.py", "tests/jobs/test_backfill_job.py"] | Deferrable Operators get stuck as "scheduled" during backfill | ### Apache Airflow version
2.3.3
### What happened
If you try to backfill a DAG that uses any [deferrable operators](https://airflow.apache.org/docs/apache-airflow/stable/concepts/deferring.html), those tasks will get indefinitely stuck in a "scheduled" state.
If I watch the Grid View, I can see the task state change: "scheduled" (or sometimes "queued") -> "deferred" -> "scheduled". I've tried leaving it in this state for over an hour, but there are no further state changes.
When the task is stuck like this, the log appears as empty in the web UI. The corresponding log file *does* exist on the worker, but it does not contain any errors or warnings that might point to the source of the problem.
Ctrl-C-ing the backfill at this point seems to hang on "Shutting down LocalExecutor; waiting for running tasks to finish." **Force-killing and restarting the backfill will "unstick" the stuck tasks.** However, any deferrable operators downstream of the first will get back into that stuck state, requiring multiple restarts to get everything to complete successfully.
### What you think should happen instead
Deferrable operators should work as normal when backfilling.
### How to reproduce
```
#!/usr/bin/env python3
import datetime
import logging

import pendulum

from airflow.decorators import dag, task
from airflow.sensors.time_sensor import TimeSensorAsync

logger = logging.getLogger(__name__)


@dag(
    schedule_interval='@daily',
    start_date=datetime.datetime(2022, 8, 10),
)
def test_backfill():
    time_sensor = TimeSensorAsync(
        task_id='time_sensor',
        target_time=datetime.time(0).replace(tzinfo=pendulum.UTC),  # midnight - should succeed immediately when the trigger first runs
    )

    @task
    def some_task():
        logger.info('hello')

    time_sensor >> some_task()


dag = test_backfill()

if __name__ == '__main__':
    dag.cli()
```
`airflow dags backfill test_backfill -s 2022-08-01 -e 2022-08-04`
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
None
### Deployment
Other
### Deployment details
Self-hosted/standalone
### Anything else
I was able to reproduce this with the following configurations:
* `standalone` mode + SQLite backend + `SequentialExecutor`
* `standalone` mode + Postgres backend + `LocalExecutor`
* Production deployment (self-hosted) + Postgres backend + `CeleryExecutor`
I have not yet found anything telling in any of the backend logs.
Possibly related:
* #23693
* #23145
* #13542
- A modified version of the workaround mentioned in [this comment](https://github.com/apache/airflow/issues/13542#issuecomment-1011598836) works to unstick the first stuck task. However if you run it multiple times to try to unstick any downstream tasks, it causes the backfill command to crash.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25653 | https://github.com/apache/airflow/pull/26205 | f01eed6490acd3bb3a58824e7388c4c3cd50ae29 | 3396d1f822caac7cbeb14e1e67679b8378a84a6c | "2022-08-10T19:19:21Z" | python | "2022-09-23T05:08:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,641 | ["airflow/www/templates/airflow/dag_audit_log.html", "airflow/www/views.py"] | Improve audit log | ### Discussed in https://github.com/apache/airflow/discussions/25638
See the discussion. There are a couple of improvements that can be done:
* add an attribute to download the log rather than open it in-browser
* add .log or similar (.txt?) extension
* sort the output
* possibly more
<div type='discussions-op-text'>
<sup>Originally posted by **V0lantis** August 10, 2022</sup>
### Apache Airflow version
2.3.3
### What happened
The audit log link crashes because there is too much data displayed.
### What you think should happen instead
The window shouldn't crash
### How to reproduce
Displaying a dag audit log with thousands or millions of lines should do the trick
### Operating System
```
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2" HOME_URL="https://amazonlinux.com/"
```
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==4.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.1.0
apache-airflow-providers-datadog==3.0.0
apache-airflow-providers-docker==3.0.0
apache-airflow-providers-ftp==3.0.0
apache-airflow-providers-github==2.0.0
apache-airflow-providers-google==8.1.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.0.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-jira==3.0.0
apache-airflow-providers-mysql==3.0.0
apache-airflow-providers-postgres==5.0.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sftp==3.0.0
apache-airflow-providers-slack==5.0.0
apache-airflow-providers-sqlite==3.0.0
apache-airflow-providers-ssh==3.0.0
apache-airflow-providers-tableau==3.0.0
apache-airflow-providers-zendesk==4.0.0
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
k8s
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/25641 | https://github.com/apache/airflow/pull/25856 | 634b9c03330c8609949f070457e7b99a6e029f26 | 50016564fa6ab6c6b02bdb0c70fccdf9b75c2f10 | "2022-08-10T13:42:53Z" | python | "2022-08-23T00:31:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,588 | ["airflow/models/mappedoperator.py", "tests/models/test_mappedoperator.py"] | Mapped KubernetesPodOperater not rendering nested templates | ### Apache Airflow version
2.3.3
### What happened
Nested values, such as `env_vars` for the `KubernetesPodOperator`, are not being rendered when used as a dynamically mapped operator.
Assuming the following:
```python
op = KubernetesPodOperator.partial(
    env_vars=[k8s.V1EnvVar(name='AWS_ACCESS_KEY_ID', value='{{ var.value.aws_access_key_id }}')],
    # Other arguments
).expand(arguments=[[1], [2]])
```
The *Rendered Template* results for `env_vars` should be:
```
("[{'name': 'AWS_ACCESS_KEY_ID', 'value': 'some-super-secret-value', 'value_from': None}]")
```
Instead the actual *Rendered Template* results for `env_vars` are un-rendered:
```
("[{'name': 'AWS_ACCESS_KEY_ID', 'value': '{{ var.value.aws_access_key_id }}', 'value_from': None}]")
```
This is probably caused by the fact that `MappedOperator` is not calling [`KubernetesPodOperator._render_nested_template_fields`](https://github.com/apache/airflow/blob/main/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py#L286).
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 18.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25588 | https://github.com/apache/airflow/pull/25599 | 762588dcf4a05c47aa253b864bda00726a5569dc | ed39703cd4f619104430b91d7ba67f261e5bfddb | "2022-08-08T06:17:20Z" | python | "2022-08-15T12:02:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,580 | [".github/workflows/ci.yml", "BREEZE.rst", "TESTING.rst", "dev/breeze/src/airflow_breeze/commands/testing_commands.py", "dev/breeze/src/airflow_breeze/commands/testing_commands_config.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_testing.svg", "images/breeze/output_testing_helm-tests.svg", "images/breeze/output_testing_tests.svg", "scripts/in_container/check_environment.sh"] | Convet running Helm unit tests to use the new breeze | The unit tests of Helm (using `helm template` still use bash scripts not the new breeze - we should switch them). | https://github.com/apache/airflow/issues/25580 | https://github.com/apache/airflow/pull/25581 | 0d34355ffa3f9f2ecf666d4518d36c4366a4c701 | a562cc396212e4d21484088ac5f363ade9ac2b8d | "2022-08-07T13:24:26Z" | python | "2022-08-08T06:56:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,555 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | Airflow doesn't re-use a secrets backend instance when loading configuration values | ### Apache Airflow version
main (development)
### What happened
When airflow is loading its configuration, it creates a new secrets backend instance for each configuration backend it loads from secrets and then additionally creates a global secrets backend instance that is used in `ensure_secrets_loaded` which code outside of the configuration file uses. This can cause issues with the vault backend (and possibly others, not sure) since logging in to vault can be an expensive operation server-side and each instance of the vault secrets backend needs to re-login to use its internal client.
### What you think should happen instead
Ideally, airflow would attempt to create a single secrets backend instance and re-use this. This can possibly be patched in the vault secrets backend, but instead I think updating the `configuration` module to cache the secrets backend would be preferable since it would then apply to any secrets backend.
### How to reproduce
Use the hashicorp vault secrets backend and store some configuration in `X_secret` values. See that it logs in more than you'd expect.
### Operating System
Ubuntu 18.04
### Versions of Apache Airflow Providers
```
apache-airflow==2.3.0
apache-airflow-providers-hashicorp==2.2.0
hvac==0.11.2
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25555 | https://github.com/apache/airflow/pull/25556 | 33fbe75dd5100539c697d705552b088e568d52e4 | 5863c42962404607013422a40118d8b9f4603f0b | "2022-08-05T16:13:36Z" | python | "2022-08-06T14:21:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,523 | ["airflow/www/static/js/graph.js"] | Mapped, classic operator tasks within TaskGroups prepend `group_id` in Graph View | ### Apache Airflow version
main (development)
### What happened
When mapped classic operator tasks exist within TaskGroups, the `group_id` of the TaskGroup is prepended to the displayed `task_id` in the Graph View.
In the below screenshot, all displayed task IDs only contain the direct `task_id` except for the "mapped_classic_task". This particular task is a mapped `BashOperator` task. The prepended `group_id` does not appear for unmapped, classic operator tasks, nor mapped and unmapped TaskFlow tasks.
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/48934154/182760586-975a7886-bcd6-477d-927b-25e82139b5b7.png">
### What you think should happen instead
The pattern of the displayed task names should be consistent for all task types (mapped/unmapped, classic operators/TaskFlow functions). Additionally, having the `group_id` prepended to the mapped, classic operator tasks is a little redundant and less readable.
### How to reproduce
1. Use an example DAG of the following:
```python
from pendulum import datetime

from airflow.decorators import dag, task, task_group
from airflow.operators.bash import BashOperator


@dag(start_date=datetime(2022, 1, 1), schedule_interval=None)
def task_group_task_graph():
    @task_group
    def my_task_group():
        BashOperator(task_id="not_mapped_classic_task", bash_command="echo")
        BashOperator.partial(task_id="mapped_classic_task").expand(
            bash_command=["echo", "echo hello", "echo world"]
        )

        @task
        def another_task(input=None):
            ...

        another_task.override(task_id="not_mapped_taskflow_task")()
        another_task.override(task_id="mapped_taskflow_task").expand(input=[1, 2, 3])

    my_task_group()


_ = task_group_task_graph()
```
2. Navigate to the Graph view
3. Notice that the `task_id` for the "mapped_classic_task" prepends the TaskGroup `group_id` of "my_task_group" while the other tasks in the TaskGroup do not.
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Breeze
### Anything else
Setting `prefix_group_id=False` for the TaskGroup does remove the prepended `group_id` from the tasks display name.
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25523 | https://github.com/apache/airflow/pull/26108 | 5697e9fdfa9d5af2d48f7037c31972c2db1f4397 | 3b76e81bcc9010cfec4d41fe33f92a79020dbc5b | "2022-08-04T04:13:48Z" | python | "2022-09-01T16:32:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,512 | ["airflow/www/static/js/dag/grid/index.tsx"] | Vertical overlay scrollbar on Grid view blocks last DAG run column | ### Apache Airflow version
2.3.3 (latest released)
### What happened
The vertical overlay scrollbar in Grid view on the Web UI (#22134) covers up the final DAG run column and makes it impossible to click on the tasks for that DAG run:


### What you think should happen instead
Either pad the Grid view so the scrollbar does not appear on top of the content or force the scroll bar to take up its own space
### How to reproduce
Have a DAG run with enough tasks to cause vertical overflow. Found on Linux + FF 102
### Operating System
Fedora 36
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25512 | https://github.com/apache/airflow/pull/25554 | 5668888a7e1074a620b3d38f407ecf1aa055b623 | fe9772949eba35c73101c3cd93a7c76b3e633e7e | "2022-08-03T16:10:55Z" | python | "2022-08-05T16:46:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,508 | ["airflow/migrations/versions/0118_2_5_0_add_updated_at_to_dagrun_and_ti.py", "airflow/models/dagrun.py", "airflow/models/taskinstance.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/img/airflow_erd.svg", "docs/apache-airflow/migrations-ref.rst", "tests/models/test_taskinstance.py"] | add lastModified columns to DagRun and TaskInstance. | I wonder if we should add lastModified columns to DagRun and TaskInstance. It might help a lot of UI/API queries.
_Originally posted by @ashb in https://github.com/apache/airflow/issues/23805#issuecomment-1143752368_ | https://github.com/apache/airflow/issues/25508 | https://github.com/apache/airflow/pull/26252 | 768865e10c811bc544590ec268f9f5c334da89b5 | 4930df45f5bae89c297dbcd5cafc582a61a0f323 | "2022-08-03T14:49:55Z" | python | "2022-09-19T13:28:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,493 | ["airflow/www/views.py", "tests/www/views/test_views_base.py", "tests/www/views/test_views_home.py"] | URL contains tag query parameter but Airflow UI does not correctly visualize the tags | ### Apache Airflow version
2.3.3 (latest released)
### What happened
A URL I saved in the past, `https://astronomer.astronomer.run/dx4o2568/home?tags=test`, has the tag field in the query parameter, though I was not aware of this. When I clicked on the URL, I was confused because I did not see any DAGs when I should have seen a bunch.
After closer inspection, I realized that the URL has the tag field in the query parameter but then noticed that the tag box in the Airflow UI wasn't properly populated.

### What you think should happen instead
When I clicked on the URL, the tag box should have been populated with the strings in the URL.
### How to reproduce
Start an Airflow deployment with some DAGs and add the tag query parameter. More specifically, it has to be a tag that is not used by any DAG.
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25493 | https://github.com/apache/airflow/pull/25715 | ea306c9462615d6b215d43f7f17d68f4c62951b1 | 485142ac233c4ac9627f523465b7727c2d089186 | "2022-08-03T00:03:45Z" | python | "2022-11-24T10:27:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,492 | ["airflow/api_connexion/endpoints/plugin_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/plugin_schema.py", "airflow/www/static/js/types/api-generated.ts"] | API server /plugin crashes | ### Apache Airflow version
2.3.3 (latest released)
### What happened
The `/plugins` endpoint returned a 500 http status code.
```
curl -X GET http://localhost:8080/api/v1/plugins\?limit\=1 \
-H 'Cache-Control: no-cache' \
--user "admin:admin"
{
"detail": "\"{'name': 'Test View', 'category': 'Test Plugin', 'view': 'test.appbuilder_views.TestAppBuilderBaseView'}\" is not of type 'object'\n\nFailed validating 'type' in schema['allOf'][0]['properties']['plugins']['items']['properties']['appbuilder_views']['items']:\n {'nullable': True, 'type': 'object'}\n\nOn instance['plugins'][0]['appbuilder_views'][0]:\n (\"{'name': 'Test View', 'category': 'Test Plugin', 'view': \"\n \"'test.appbuilder_views.TestAppBuilderBaseView'}\")",
"status": 500,
"title": "Response body does not conform to specification",
"type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/Unknown"
}
```
The error message in the webserver is as follows
```
[2022-08-03 17:07:57,705] {validation.py:244} ERROR - http://localhost:8080/api/v1/plugins?limit=1 validation error: "{'name': 'Test View', 'category': 'Test Plugin', 'view': 'test.appbuilder_views.TestAppBuilderBaseView'}" is not of type 'object'
Failed validating 'type' in schema['allOf'][0]['properties']['plugins']['items']['properties']['appbuilder_views']['items']:
{'nullable': True, 'type': 'object'}
On instance['plugins'][0]['appbuilder_views'][0]:
("{'name': 'Test View', 'category': 'Test Plugin', 'view': "
"'test.appbuilder_views.TestAppBuilderBaseView'}")
172.18.0.1 - admin [03/Aug/2022:17:10:17 +0000] "GET /api/v1/plugins?limit=1 HTTP/1.1" 500 733 "-" "curl/7.79.1"
```
### What you think should happen instead
The response should contain all the plugins integrated with Airflow.
### How to reproduce
Create a simple plugin in the plugin directory.
`appbuilder_views.py`
```
from flask_appbuilder import expose, BaseView as AppBuilderBaseView

# Creating a flask appbuilder BaseView
class TestAppBuilderBaseView(AppBuilderBaseView):
    @expose("/")
    def test(self):
        return self.render_template("test_plugin/test.html", content="Hello galaxy!")
```
`plugin.py`
```
from airflow.plugins_manager import AirflowPlugin
from test.appbuilder_views import TestAppBuilderBaseView

class TestPlugin(AirflowPlugin):
    name = "test"
    appbuilder_views = [
        {
            "name": "Test View",
            "category": "Test Plugin",
            "view": TestAppBuilderBaseView()
        }
    ]
```
Call the `/plugin` endpoint.
```
curl -X GET http://localhost:8080/api/v1/plugins\?limit\=1 \
-H 'Cache-Control: no-cache' \
--user "admin:admin"
```
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25492 | https://github.com/apache/airflow/pull/25524 | 7e3d2350dbb23b9c98bbadf73296425648e1e42d | 5de11e1410b432d632e8c0d1d8ca0945811a56f0 | "2022-08-02T23:44:07Z" | python | "2022-08-04T15:37:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,395 | ["airflow/providers/snowflake/provider.yaml", "airflow/providers/snowflake/transfers/copy_into_snowflake.py", "airflow/providers/snowflake/transfers/s3_to_snowflake.py", "scripts/in_container/verify_providers.py", "tests/providers/snowflake/transfers/test_copy_into_snowflake.py"] | GCSToSnowflakeOperator with feature parity to the S3ToSnowflakeOperator | ### Description
Require an operator similar to the S3ToSnowflakeOperator but for GCS to load data stored in GCS to a Snowflake table.
### Use case/motivation
Same as the S3ToSnowflakeOperator.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25395 | https://github.com/apache/airflow/pull/25541 | 2ee099655b1ca46935dbf3e37ae0ec1139f98287 | 5c52bbf32d81291b57d051ccbd1a2479ff706efc | "2022-07-29T10:23:52Z" | python | "2022-08-26T22:03:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,360 | ["airflow/models/abstractoperator.py", "airflow/models/baseoperator.py", "airflow/operators/trigger_dagrun.py", "airflow/providers/qubole/operators/qubole.py", "airflow/www/static/js/dag.js", "airflow/www/static/js/dag/details/taskInstance/index.tsx", "docs/spelling_wordlist.txt"] | Extra Links do not works with mapped operators | ### Apache Airflow version
main (development)
### What happened
I found that Extra Links do not work with dynamic tasks at all - the links are inaccessible - but the same Extra Links work fine with non-mapped operators.
I think the nature of it is that extra links are assigned to the parent task instance (I do not know the correct name for this TI) but not to the actual mapped TIs.
As a result we only have `number of extra links defined in operator`, not `(number of extra links defined in operator) x number of mapped TIs`.
### What you think should happen instead
_No response_
### How to reproduce
```python
from pendulum import datetime

from airflow.decorators import dag
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.operators.empty import EmptyOperator

EXTERNAL_DAG_IDS = [f"example_external_dag_{ix:02d}" for ix in range(3)]
DAG_KWARGS = {
    "start_date": datetime(2022, 7, 1),
    "schedule_interval": "@daily",
    "catchup": False,
    "tags": ["mapped_extra_links", "AIP-42", "serialization"],
}


def external_dags():
    EmptyOperator(task_id="dummy")


@dag(**DAG_KWARGS)
def external_regular_task_sensor():
    for external_dag_id in EXTERNAL_DAG_IDS:
        ExternalTaskSensor(
            task_id=f'wait_for_{external_dag_id}',
            external_dag_id=external_dag_id,
            poke_interval=5,
        )


@dag(**DAG_KWARGS)
def external_mapped_task_sensor():
    ExternalTaskSensor.partial(
        task_id='wait',
        poke_interval=5,
    ).expand(external_dag_id=EXTERNAL_DAG_IDS)


dag_external_regular_task_sensor = external_regular_task_sensor()
dag_external_mapped_task_sensor = external_mapped_task_sensor()

for dag_id in EXTERNAL_DAG_IDS:
    globals()[dag_id] = dag(dag_id=dag_id, **DAG_KWARGS)(external_dags)()
```
https://user-images.githubusercontent.com/3998685/180994213-847b3fd3-d351-4836-b246-b54056f34ad6.mp4
### Operating System
macOs 12.5
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25360 | https://github.com/apache/airflow/pull/25500 | 4ecaa9e3f0834ca0ef08002a44edda3661f4e572 | d9e924c058f5da9eba5bb5b85a04bfea6fb2471a | "2022-07-28T10:44:40Z" | python | "2022-08-05T03:41:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,344 | ["airflow/models/abstractoperator.py", "tests/models/test_baseoperator.py"] | Improve Airflow logging for operator Jinja template processing | ### Description
When an operator uses Jinja templating, debugging issues is difficult because the Airflow task log only displays a stack trace.
### Use case/motivation
When there's a templating issue, I'd like to have some specific, actionable info to help understand the problem. At minimum:
* Which operator or task had the issue?
* Which field had the issue?
* What was the Jinja template?
Possibly also the Jinja context, although that can be very verbose.
I have prototyped this in my local Airflow dev environment, and I propose something like the following. (Note the logging commands, which are not present in the Airflow repo.)
Please let me know if this sounds reasonable, and I will be happy to create a PR.
```
def _do_render_template_fields(
self,
parent,
template_fields,
context,
jinja_env,
seen_oids,
) -> None:
"""Copied from Airflow 2.2.5 with added logging."""
logger.info(f"BaseOperator._do_render_template_fields(): Task {self.task_id}")
for attr_name in template_fields:
content = getattr(parent, attr_name)
if content:
logger.info(f"Rendering template for '{attr_name}' field: {content!r}")
rendered_content = self.render_template(content, context, jinja_env, seen_oids)
+ setattr(parent, attr_name, rendered_content)
```
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25344 | https://github.com/apache/airflow/pull/25452 | 9c632684341fb3115d654aecb83aa951d80b19af | 4da2b0c216c92795f19862a3ff6634e5a5936138 | "2022-07-27T15:46:39Z" | python | "2022-08-02T19:40:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,295 | ["airflow/models/param.py", "tests/models/test_param.py"] | ParamsDict represents the class object itself, not keys and values on Task Instance Details | ### Apache Airflow version
2.3.3 (latest released)
### What happened
ParamsDict's printable representation shows the class object itself, like `<airflow.models.param.ParamsDict object at 0x7fd0eba9bb80>`, on the Task Instance Details page because it does not have a `__repr__` method in its class.
<img width="791" alt="image" src="https://user-images.githubusercontent.com/16971553/180902761-88b9dd9f-7102-4e49-b8b8-0282b31dda56.png">
It used to be a `dict` object, and the keys and values that Params include were shown on the UI, before Params was replaced with the advanced Params in #17100.
### What you think should happen instead
It was originally shown below when it was `dict` object.

I think it can be fixed by adding `__repr__` method to the class like below.
```python
class ParamsDict(dict):
    ...
    def __repr__(self):
        return f"{self.dump()}"
```
### How to reproduce
I guess it all happens on Airflow using 2.2.0+
### Operating System
Linux, but it's not depending on OS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25295 | https://github.com/apache/airflow/pull/25305 | 285c23a2f90f4c765053aedbd3f92c9f58a84d28 | df388a3d5364b748993e61b522d0b68ff8b8124a | "2022-07-26T01:51:45Z" | python | "2022-07-27T07:13:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,274 | ["airflow/providers/common/sql/hooks/sql.py", "tests/providers/common/sql/hooks/test_dbapi.py"] | Apache Airflow SqlSensor DbApiHook Error | ### Apache Airflow version
2.3.3 (latest released)
### What happened
I am trying to make SqlSensor work with an Oracle database. I've installed all the required providers and successfully tested the connection. When I run SqlSensor I get this error message:
`ERROR - Failed to execute job 32 for task check_exec_date (The connection type is not supported by SqlSensor. The associated hook should be a subclass of `DbApiHook`. Got OracleHook; 419)`
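For illustration, a minimal sketch of the kind of usage that hits this code path (the connection id and query are placeholders, not taken from the original report):

```python
from airflow.sensors.sql import SqlSensor

# Placeholder connection id and query; any Oracle-backed connection triggers the same hook check.
wait_for_row = SqlSensor(
    task_id="check_exec_date",
    conn_id="oracle_default",
    sql="SELECT 1 FROM dual",
    poke_interval=60,
)
```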
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04.4 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.0.0
apache-airflow-providers-ftp==3.0.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-oracle==3.2.0
apache-airflow-providers-postgres==5.1.0
apache-airflow-providers-sqlite==3.0.0
### Deployment
Other
### Deployment details
Run on Windows Subsystem for Linux
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25274 | https://github.com/apache/airflow/pull/25293 | 7e295b7d992f4ed13911e593f15fd18e0d4c16f6 | b0fd105f4ade9933476470f6e247dd5fa518ffc9 | "2022-07-25T08:47:00Z" | python | "2022-07-27T22:11:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,271 | ["airflow/plugins_manager.py", "airflow/utils/entry_points.py", "tests/plugins/test_plugins_manager.py", "tests/utils/test_entry_points.py", "tests/www/views/test_views.py"] | Version 2.3.3 breaks "Plugins as Python packages" feature | ### Apache Airflow version
2.3.3 (latest released)
### What happened
In 2.3.3
If I use https://airflow.apache.org/docs/apache-airflow/stable/plugins.html#plugins-as-python-packages feature, then I see these Error:
short:
`ValueError: The name 'airs' is already registered for this blueprint. Use 'name=' to provide a unique name.`
long:
> i'm trying to reproduce it...
If I don't use it (working around it via AIRFLOW__CORE__PLUGINS_FOLDER), the errors don't occur.
It didn't happen in 2.3.2 and earlier.
### What you think should happen instead
Looks like plugins are imported multiple times when they are defined as plugins-as-python-packages.
Perhaps Flask's major version change is the main cause.
Presumably, in Flask 1.0 duplicate registration of a blueprint was quietly filtered out, but in 2.0 this seems to have been changed to raise an error. (I am trying to find out if this hypothesis is correct.)
Anyway, using the latest version of FAB is important. We will have to adapt to this change, so plugins will have to be imported only once regardless of how they are defined.
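For reference, this is the kind of packaging the "plugins as Python packages" feature refers to (package, module and plugin class names here are made up for illustration):

```python
from setuptools import setup

setup(
    name="my-airflow-plugin",          # illustrative package name
    packages=["my_airflow_plugin"],
    entry_points={
        # Airflow discovers plugins registered under the "airflow.plugins" entry point group.
        "airflow.plugins": [
            "my_plugin = my_airflow_plugin.plugin:MyAirflowPlugin",
        ],
    },
)
```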
### How to reproduce
> It was reproduced in the environment used at work, but it is difficult to disclose or explain it.
> I'm working to reproduce it with the breeze command, and I open the issue first with the belief that it's not just me.
### Operating System
CentOS Linux release 7.9.2009 (Core)
### Versions of Apache Airflow Providers
```sh
$ SHIV_INTERPRETER=1 airsflow -m pip freeze | grep apache-
apache-airflow==2.3.3
apache-airflow-providers-apache-hive==3.1.0
apache-airflow-providers-apache-spark==2.1.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-common-sql==1.0.0
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-postgres==5.1.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sqlite==3.1.0
```
but I think these are irrelevant.
### Deployment
Other 3rd-party Helm chart
### Deployment details
docker image based on centos7, python 3.9.10 interpreter, self-written helm2 chart ....
... but I think these are irrelevant.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25271 | https://github.com/apache/airflow/pull/25296 | cd14f3f65ad5011058ab53f2119198d6c082e82c | c30dc5e64d7229cbf8e9fbe84cfa790dfef5fb8c | "2022-07-25T07:11:29Z" | python | "2022-08-03T13:01:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,241 | ["airflow/www/views.py", "tests/www/views/test_views_grid.py"] | Add has_dataset_outlets in /grid_data | Return `has_dataset_outlets` in /grid_data so we can know whether to check for downstream dataset events in grid view.
Also: add `operator`
Also be mindful of performance on those endpoints (e.g do things in a bulk query), and it should be part of the acceptance criteria. | https://github.com/apache/airflow/issues/25241 | https://github.com/apache/airflow/pull/25323 | e994f2b0201ca9dfa3397d22b5ac9d10a11a8931 | d2df9fe7860d1e795040e40723828c192aca68be | "2022-07-22T19:28:28Z" | python | "2022-07-28T10:34:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,240 | ["airflow/www/forms.py", "tests/www/views/test_views_connection.py"] | Strip white spaces from values entered into fields in Airflow UI Connections form | ### Apache Airflow version
2.3.3 (latest released)
### What happened
I accidentally (and then intentionally) added leading and trailing white spaces while adding connection parameters in Airflow UI Connections form. What followed was an error message that was not so helpful in tracking down the input error by the user.
### What you think should happen instead
Ideally, there should be frontend or backend logic that strips off accidental leading or trailing white spaces when adding connection parameters in Airflow.
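One possible shape for such server-side normalization (a sketch assuming a WTForms-style form; this is not the actual Airflow implementation, and the field names are illustrative):

```python
from wtforms import Form, StringField
from wtforms.validators import InputRequired


def strip_whitespace(value):
    # Normalize user input by removing accidental leading/trailing spaces.
    return value.strip() if isinstance(value, str) else value


class ConnectionFormSketch(Form):
    # Field names are illustrative; the real form lives in airflow/www/forms.py.
    conn_id = StringField("Connection Id", validators=[InputRequired()], filters=[strip_whitespace])
    host = StringField("Host", filters=[strip_whitespace])
    schema = StringField("Schema", filters=[strip_whitespace])
```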
### How to reproduce
Intentionally add leading or trailing white spaces while adding Connections parameters.
<img width="981" alt="Screenshot 2022-07-22 at 18 49 54" src="https://user-images.githubusercontent.com/9834450/180497315-0898d803-c104-4d93-b464-c0b33a466b4d.png">
### Operating System
Mac OS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25240 | https://github.com/apache/airflow/pull/32292 | 410d0c0f86aaec71e2c0050f5adbc53fb7b441e7 | 394cedb01abd6539f6334a40757bf186325eb1dd | "2022-07-22T18:02:47Z" | python | "2023-07-11T20:04:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,210 | ["airflow/datasets/manager.py", "airflow/models/dataset.py", "tests/datasets/test_manager.py", "tests/models/test_taskinstance.py"] | Many tasks updating dataset at once causes some of them to fail | ### Apache Airflow version
main (development)
### What happened
I have 16 dags which all update the same dataset. They're set to finish at the same time (when the seconds on the clock are 00). About three quarters of them behave as expected, but the other quarter fails with errors like:
```
[2022-07-21, 06:06:00 UTC] {standard_task_runner.py:97} ERROR - Failed to execute job 8 for task increment_source ((psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "dataset_dag_run_queue_pkey"
DETAIL: Key (dataset_id, target_dag_id)=(1, simple_dataset_sink) already exists.
[SQL: INSERT INTO dataset_dag_run_queue (dataset_id, target_dag_id, created_at) VALUES (%(dataset_id)s, %(target_dag_id)s, %(created_at)s)]
[parameters: {'dataset_id': 1, 'target_dag_id': 'simple_dataset_sink', 'created_at': datetime.datetime(2022, 7, 21, 6, 6, 0, 131730, tzinfo=Timezone('UTC'))}]
(Background on this error at: https://sqlalche.me/e/14/gkpj); 375)
```
I've prepared a gist with the details: https://gist.github.com/MatrixManAtYrService/b5e58be0949eab9180608d0760288d4d
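For context, the colliding pattern looks roughly like this (a condensed sketch with illustrative names; the real setup has 16 producers, and parameter names follow the released dataset API, which may differ slightly from the main-branch snapshot used here):

```python
from datetime import datetime

from airflow import DAG
from airflow.datasets import Dataset
from airflow.operators.python import PythonOperator

shared = Dataset("file:///tmp/shared.txt")  # illustrative URI

with DAG("simple_dataset_source", start_date=datetime(2022, 7, 1),
         schedule="* * * * *", catchup=False) as producer:
    # Finishing this task emits a dataset event for `shared`; many such DAGs finish at once.
    PythonOperator(task_id="increment_source", python_callable=lambda: None, outlets=[shared])

with DAG("simple_dataset_sink", start_date=datetime(2022, 7, 1),
         schedule=[shared], catchup=False) as consumer:
    PythonOperator(task_id="consume", python_callable=lambda: None)
```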
### What you think should happen instead
All dags should succeed
### How to reproduce
See this gist: https://gist.github.com/MatrixManAtYrService/b5e58be0949eab9180608d0760288d4d
Summary: Unpause all of the dags which we expect to collide, wait two minutes. Some will have collided.
### Operating System
docker/debian
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astro dev start` targeting commit: cff7d9194f549d801947f47dfce4b5d6870bfaaa
be sure to have `pause` in requirements.txt
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25210 | https://github.com/apache/airflow/pull/26103 | a2db8fcb7df1a266e82e17b937c9c1cf01a16a42 | 4dd628c26697d759aebb81a7ac2fe85a79194328 | "2022-07-21T06:28:32Z" | python | "2022-09-01T20:28:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,200 | ["airflow/models/baseoperator.py", "airflow/models/dagrun.py", "airflow/models/taskinstance.py", "airflow/ti_deps/dep_context.py", "airflow/ti_deps/deps/trigger_rule_dep.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py", "tests/ti_deps/deps/test_trigger_rule_dep.py"] | DAG Run fails when chaining multiple empty mapped tasks | ### Apache Airflow version
2.3.3 (latest released)
### What happened
On Kubernetes Executor and Local Executor (others not tested), a significant fraction of the DAG Runs of a DAG that has two consecutive mapped tasks which are being passed an empty list are marked as failed, even though all tasks either succeed or are skipped.

### What you think should happen instead
The DAG Run should be marked success.
### How to reproduce
Run the following DAG on Kubernetes Executor or Local Executor.
The real world version of this DAG has several mapped tasks that all point to the same list, and that list is frequently empty. I have made a minimal reproducible example.
```py
from datetime import datetime
from airflow import DAG
from airflow.decorators import task

with DAG(dag_id="break_mapping", start_date=datetime(2022, 3, 4)) as dag:

    @task
    def add_one(x: int):
        return x + 1

    @task
    def say_hi():
        print("Hi")

    added_values = add_one.expand(x=[])
    added_more_values = add_one.expand(x=[])
    say_hi() >> added_values
    added_values >> added_more_values
```
### Operating System
Debian Bullseye
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==1!4.0.0
apache-airflow-providers-cncf-kubernetes==1!4.1.0
apache-airflow-providers-elasticsearch==1!4.0.0
apache-airflow-providers-ftp==1!3.0.0
apache-airflow-providers-google==1!8.1.0
apache-airflow-providers-http==1!3.0.0
apache-airflow-providers-imap==1!3.0.0
apache-airflow-providers-microsoft-azure==1!4.0.0
apache-airflow-providers-mysql==1!3.0.0
apache-airflow-providers-postgres==1!5.0.0
apache-airflow-providers-redis==1!3.0.0
apache-airflow-providers-slack==1!5.0.0
apache-airflow-providers-sqlite==1!3.0.0
apache-airflow-providers-ssh==1!3.0.0
```
### Deployment
Astronomer
### Deployment details
Local was tested on docker compose (from astro-cli)
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25200 | https://github.com/apache/airflow/pull/25995 | 1e19807c7ea0d7da11b224658cd9a6e3e7a14bc5 | 5697e9fdfa9d5af2d48f7037c31972c2db1f4397 | "2022-07-20T20:33:42Z" | python | "2022-09-01T12:03:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,179 | ["airflow/providers/apache/livy/hooks/livy.py", "airflow/providers/apache/livy/operators/livy.py", "airflow/providers/apache/livy/sensors/livy.py", "tests/providers/apache/livy/hooks/test_livy.py"] | Add auth_type to LivyHook | ### Apache Airflow Provider(s)
apache-livy
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-livy==3.0.0
### Apache Airflow version
2.3.3 (latest released)
### Operating System
Ubuntu 18.04
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### What happened
This is a feature request as opposed to an issue.
I want to use the `LivyHook` to communicate with a Kerberized cluster.
As such, I am using `requests_kerberos.HTTPKerberosAuth` as the authentication type.
Currently, I am implementing this as follows:
```python
from airflow.providers.apache.livy.hooks.livy import LivyHook as NativeHook
from requests_kerberos import HTTPKerberosAuth as NativeAuth

class HTTPKerberosAuth(NativeAuth):
    def __init__(self, *ignore_args, **kwargs):
        super().__init__(**kwargs)

class LivyHook(NativeHook):
    def __init__(self, auth_type=HTTPKerberosAuth, **kwargs):
        super().__init__(**kwargs)
        self.auth_type = auth_type
```
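With an `auth_type` parameter exposed on the hook, the subclassing above could collapse to something like this (a sketch of the requested API, not something that works in the current provider release):

```python
from airflow.providers.apache.livy.hooks.livy import LivyHook
from requests_kerberos import HTTPKerberosAuth

# Requested usage: pass the auth class straight to the hook.
hook = LivyHook(livy_conn_id="livy_default", auth_type=HTTPKerberosAuth)
```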
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25179 | https://github.com/apache/airflow/pull/25183 | ae7bf474109410fa838ab2728ae6d581cdd41808 | 7d3e799f7e012d2d5c1fe24ce2bea01e68a5a193 | "2022-07-20T10:09:03Z" | python | "2022-08-07T13:49:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,165 | ["airflow/decorators/base.py", "tests/decorators/test_mapped.py", "tests/utils/test_task_group.py"] | Dynamic Tasks inside of TaskGroup do not have group_id prepended to task_id | ### Apache Airflow version
2.3.3 (latest released)
### What happened
As the title states, if you have dynamically mapped tasks inside of a `TaskGroup`, those tasks do not get the `group_id` prepended to their respective `task_id`s. This causes at least a couple of undesirable side effects:
1. Task names are truncated in Grid/Graph* View. The tasks below are named `plus_one` and `plus_two`:


Presumably this is because the UI normally strips off the `group_id` prefix.
\* Graph View was very inconsistent in my experience. Sometimes the names are truncated, and sometimes they render correctly. I haven't figured out the pattern behind this behavior.
2. Duplicate `task_id`s between groups result in an `airflow.exceptions.DuplicateTaskIdFound`, even if the `group_id` would normally disambiguate them.
### What you think should happen instead
These dynamic tasks inside of a group should have the `group_id` prepended for consistent behavior.
### How to reproduce
```
#!/usr/bin/env python3
import datetime

from airflow.decorators import dag, task
from airflow.utils.task_group import TaskGroup

@dag(
    start_date=datetime.datetime(2022, 7, 19),
    schedule_interval=None,
)
def test_dag():
    with TaskGroup(group_id='group'):
        @task
        def plus_one(x: int):
            return x + 1

        plus_one.expand(x=[1, 2, 3])

    with TaskGroup(group_id='ggg'):
        @task
        def plus_two(x: int):
            return x + 2

        plus_two.expand(x=[1, 2, 3])

dag = test_dag()

if __name__ == '__main__':
    dag.cli()
```
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone
### Anything else
Possibly related: #12309
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25165 | https://github.com/apache/airflow/pull/26081 | 6a8f0167436b8b582aeb92a93d3f69d006b36f7b | 9c4ab100e5b069c86bd00bb7860794df0e32fc2e | "2022-07-19T18:58:28Z" | python | "2022-09-01T08:46:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,163 | ["airflow/providers/common/sql/operators/sql.py", "tests/providers/common/sql/operators/test_sql.py"] | Common-SQL Operators Various Bugs | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
`apache-airflow-providers-common-sql==1.0.0`
### Apache Airflow version
2.3.3 (latest released)
### Operating System
macOS Monterey 12.3.1
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
- `SQLTableCheckOperator` builds multiple checks in such a way that if two or more checks are given, and one is not a fully aggregated statement, then the SQL fails as it is missing a `GROUP BY` clause.
- `SQLColumnCheckOperator` provides only the last SQL query built from the columns, so when a check fails, it will only give the correct SQL in the exception statement by coincidence.
### What you think should happen instead
- Multiple checks should not need a `GROUP BY` clause
- Either the correct SQL statement, or no SQL statement, should be returned in the exception message.
### How to reproduce
For the `SQLTableCheckOperator`, using the operator like so:
```
forestfire_costs_table_checks = SQLTableCheckOperator(
    task_id="forestfire_costs_table_checks",
    table=SNOWFLAKE_FORESTFIRE_COST_TABLE,
    checks={
        "row_count_check": {"check_statement": "COUNT(*) = 9"},
        "total_cost_check": {"check_statement": "land_damage_cost + property_damage_cost + lost_profits_cost = total_cost"}
    }
)
```
For the `SQLColumnCheckOperator`, using the operator like so:
```
cost_column_checks = SQLColumnCheckOperator(
    task_id="cost_column_checks",
    table=SNOWFLAKE_COST_TABLE,
    column_mapping={
        "ID": {"null_check": {"equal_to": 0}},
        "LAND_DAMAGE_COST": {"min": {"geq_to": 0}},
        "PROPERTY_DAMAGE_COST": {"min": {"geq_to": 0}},
        "LOST_PROFITS_COST": {"min": {"geq_to": 0}},
    }
)
```
and ensuring that any of the `ID`, `LAND_DAMAGE_COST`, or `PROPERTY_DAMAGE_COST` checks fail.
An example DAG with the correct environment and data can be found [here](https://github.com/astronomer/airflow-data-quality-demo/blob/main/dags/snowflake_examples/complex_snowflake_transform.py).
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25163 | https://github.com/apache/airflow/pull/25164 | d66e427c4d21bc479caa629299a786ca83747994 | be7cb1e837b875f44fcf7903329755245dd02dc3 | "2022-07-19T18:18:01Z" | python | "2022-07-22T14:01:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,149 | ["airflow/models/dagbag.py", "airflow/www/security.py", "tests/models/test_dagbag.py", "tests/www/views/test_views_home.py"] | DAG.access_control can't sync when clean access_control | ### Apache Airflow version
2.3.3 (latest released)
### What happened
I change my DAG from
```python
with DAG(
    'test',
    access_control={'team': {'can_edit', 'can_read'}},
) as dag:
    ...
```
to
```python
with DAG(
    'test',
) as dag:
    ...
```
After removing the `access_control` argument, the scheduler can't sync permissions to the db.
If we write code like this,
```python
with DAG(
    'test',
    access_control={'team': {}}
) as dag:
    ...
```
It works.
### What you think should happen instead
It should clear the permissions on the `test` DAG for the `team` role.
I think we should provide consistent permission-sync behaviour: if the `access_control` argument is given, permissions assigned in the web UI are cleared when we update the DAG file.
### How to reproduce
_No response_
### Operating System
CentOS Linux release 7.9.2009 (Core)
### Versions of Apache Airflow Providers
```
airflow-code-editor==5.2.2
apache-airflow==2.3.3
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-ftp==3.0.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-psrp==2.0.0
apache-airflow-providers-microsoft-winrm==3.0.0
apache-airflow-providers-mysql==3.0.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-samba==4.0.0
apache-airflow-providers-sftp==3.0.0
apache-airflow-providers-sqlite==3.0.0
apache-airflow-providers-ssh==3.0.0
```
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25149 | https://github.com/apache/airflow/pull/30340 | 97ad7cee443c7f4ee6c0fbaabcc73de16f99a5e5 | 2c0c8b8bfb5287e10dc40b73f326bbf9a0437bb1 | "2022-07-19T09:37:48Z" | python | "2023-04-26T14:11:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,103 | ["airflow/api_connexion/openapi/v1.yaml", "tests/api_connexion/endpoints/test_variable_endpoint.py"] | API `variables/{variable_key}` request fails if key has character `/` | ### Apache Airflow version
2.3.2
### What happened
Created a variable e.g. `a/variable` and couldn't get or delete it
### What you think should happen instead
I shouldn't have been allowed to create a variable with `/`, or the GET and DELETE should work.
### How to reproduce


```
DELETE /variables/{variable_key}
GET /variables/{variable_key}
```
Create a variable with `/` in its key, then try to get it. The GET will 404, even after HTML-escaping the key. DELETE also fails.
`GET /variables/` works just fine
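A scripted version of the reproduction (host, credentials and the variable key are placeholders; assumes the basic-auth API backend is enabled):

```python
import requests

base = "http://localhost:8080/api/v1"   # placeholder host
auth = ("admin", "admin")                # placeholder credentials

requests.post(f"{base}/variables", json={"key": "a/variable", "value": "1"}, auth=auth)
# Both of these fail with 404, even with the slash percent-encoded as %2F:
print(requests.get(f"{base}/variables/a%2Fvariable", auth=auth).status_code)
print(requests.delete(f"{base}/variables/a%2Fvariable", auth=auth).status_code)
```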
### Operating System
astro
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25103 | https://github.com/apache/airflow/pull/25774 | 98aac5dc282b139f0e726aac512b04a6693ba83d | a1beede41fb299b215f73f987a572c34f628de36 | "2022-07-15T21:22:11Z" | python | "2022-08-18T06:08:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,095 | ["airflow/models/taskinstance.py", "airflow/models/taskreschedule.py", "airflow/serialization/serialized_objects.py", "airflow/ti_deps/deps/ready_to_reschedule.py", "tests/models/test_taskinstance.py", "tests/serialization/test_dag_serialization.py", "tests/ti_deps/deps/test_ready_to_reschedule_dep.py"] | Dynamically mapped sensor with mode='reschedule' fails with violated foreign key constraint | ### Apache Airflow version
2.3.3 (latest released)
### What happened
If you are using [Dynamic Task Mapping](https://airflow.apache.org/docs/apache-airflow/stable/concepts/dynamic-task-mapping.html) to map a Sensor with `.partial(mode='reschedule')`, and if that sensor fails its poke condition even once, the whole sensor task will immediately die with an error like:
```
[2022-07-14, 10:45:05 CDT] {standard_task_runner.py:92} ERROR - Failed to execute job 19 for task check_reschedule ((sqlite3.IntegrityError) FOREIGN KEY constraint failed
[SQL: INSERT INTO task_reschedule (task_id, dag_id, run_id, map_index, try_number, start_date, end_date, duration, reschedule_date) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)]
[parameters: ('check_reschedule', 'test_dag', 'manual__2022-07-14T20:44:02.708517+00:00', -1, 1, '2022-07-14 20:45:05.874988', '2022-07-14 20:45:05.900895', 0.025907, '2022-07-14 20:45:10.898820')]
(Background on this error at: https://sqlalche.me/e/14/gkpj); 2973372)
```
A similar error arises when using a Postgres backend:
```
[2022-07-14, 11:09:22 CDT] {standard_task_runner.py:92} ERROR - Failed to execute job 17 for task check_reschedule ((psycopg2.errors.ForeignKeyViolation) insert or update on table "task_reschedule" violates foreign key constraint "task_reschedule_ti_fkey"
DETAIL: Key (dag_id, task_id, run_id, map_index)=(test_dag, check_reschedule, manual__2022-07-14T21:08:13.462782+00:00, -1) is not present in table "task_instance".
[SQL: INSERT INTO task_reschedule (task_id, dag_id, run_id, map_index, try_number, start_date, end_date, duration, reschedule_date) VALUES (%(task_id)s, %(dag_id)s, %(run_id)s, %(map_index)s, %(try_number)s, %(start_date)s, %(end_date)s, %(duration)s, %(reschedule_date)s) RETURNING task_reschedule.id]
[parameters: {'task_id': 'check_reschedule', 'dag_id': 'test_dag', 'run_id': 'manual__2022-07-14T21:08:13.462782+00:00', 'map_index': -1, 'try_number': 1, 'start_date': datetime.datetime(2022, 7, 14, 21, 9, 22, 417922, tzinfo=Timezone('UTC')), 'end_date': datetime.datetime(2022, 7, 14, 21, 9, 22, 464495, tzinfo=Timezone('UTC')), 'duration': 0.046573, 'reschedule_date': datetime.datetime(2022, 7, 14, 21, 9, 27, 458623, tzinfo=Timezone('UTC'))}]
(Background on this error at: https://sqlalche.me/e/14/gkpj); 2983150)
```
`mode='poke'` seems to behave as expected. As far as I can tell, this affects all Sensor types.
### What you think should happen instead
This combination of features should work without error.
### How to reproduce
```
#!/usr/bin/env python3
import datetime

from airflow.decorators import dag, task
from airflow.sensors.bash import BashSensor

@dag(
    start_date=datetime.datetime(2022, 7, 14),
    schedule_interval=None,
)
def test_dag():
    @task
    def get_tasks():
        return ['(($RANDOM % 2 == 0))'] * 10

    tasks = get_tasks()
    BashSensor.partial(task_id='check_poke', mode='poke', poke_interval=5).expand(bash_command=tasks)
    BashSensor.partial(task_id='check_reschedule', mode='reschedule', poke_interval=5).expand(bash_command=tasks)

dag = test_dag()

if __name__ == '__main__':
    dag.cli()
```
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25095 | https://github.com/apache/airflow/pull/25594 | 84718f92334b7e43607ab617ef31f3ffc4257635 | 5f3733ea310b53a0a90c660dc94dd6e1ad5755b7 | "2022-07-15T13:35:48Z" | python | "2022-08-11T07:30:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,092 | ["airflow/providers/microsoft/mssql/hooks/mssql.py", "tests/providers/microsoft/mssql/hooks/test_mssql.py"] | MsSqlHook.get_sqlalchemy_engine uses pyodbc instead of pymssql | ### Apache Airflow Provider(s)
microsoft-mssql
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-mssql==2.0.1
### Apache Airflow version
2.2.2
### Operating System
Ubuntu 20.04
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
`MsSqlHook.get_sqlalchemy_engine` uses the default mssql driver: `pyodbc` instead of `pymssql`.
- If pyodbc is installed: we get `sqlalchemy.exc.InterfaceError: (pyodbc.InterfaceError)`
- Otherwise we get: `ModuleNotFoundError`
PS: Looking at the code, this should still apply up to provider version 3.0.0 (latest version).
### What you think should happen instead
The default driver used by `sqlalchemy.create_engine` for mssql is `pyodbc`.
To use `pymssql` with `create_engine`, the URI needs to start with `mssql+pymssql://` (currently the hook uses `DBApiHook.get_uri`, which produces a URI starting with `mssql://`).
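A possible workaround until the hook is fixed (a sketch based on the URI rewrite described above, not the provider's actual fix):

```python
from airflow.providers.microsoft.mssql.hooks.mssql import MsSqlHook


class PymssqlMsSqlHook(MsSqlHook):
    """Force the pymssql dialect by rewriting the URI handed to SQLAlchemy."""

    def get_uri(self) -> str:
        uri = super().get_uri()
        if uri.startswith("mssql://"):
            uri = uri.replace("mssql://", "mssql+pymssql://", 1)
        return uri
```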
### How to reproduce
```python
>>> from contextlib import closing
>>> from airflow.providers.microsoft.mssql.hooks.mssql import MsSqlHook
>>>
>>> hook = MsSqlHook()
>>> with closing(hook.get_sqlalchemy_engine().connect()) as c:
...     with closing(c.execute("SELECT SUSER_SNAME()")) as res:
...         r = res.fetchone()
```
Will raise an exception due to the wrong driver being used.
### Anything else
Demo for sqlalchemy default mssql driver choice:
```bash
# pip install sqlalchemy
... Successfully installed sqlalchemy-1.4.39
# pip install pymssql
... Successfully installed pymssql-2.2.5
```
```python
>>> from sqlalchemy import create_engine
>>> create_engine("mssql://test:pwd@test:1433")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 2, in create_engine
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/deprecations.py", line 309, in warned
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/create.py", line 560, in create_engine
dbapi = dialect_cls.dbapi(**dbapi_args)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/connectors/pyodbc.py", line 43, in dbapi
return __import__("pyodbc")
ModuleNotFoundError: No module named 'pyodbc'
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25092 | https://github.com/apache/airflow/pull/25185 | a01cc5b0b8e4ce3b24970d763e4adccfb4e69f09 | df5a54d21d6991d6cae05c38e1562da2196e76aa | "2022-07-15T12:42:02Z" | python | "2022-08-05T15:41:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,090 | ["airflow/jobs/scheduler_job.py", "airflow/models/dag.py", "airflow/timetables/base.py", "airflow/timetables/simple.py", "airflow/www/views.py", "newsfragments/25090.significant.rst"] | More natural sorting of DAG runs in the grid view | ### Apache Airflow version
2.3.2
### What happened
DAG scheduled to run once every hour.
The DAG was started manually at 12:44; let's call this run 1.
At 13:00 the scheduled run started; let's call this run 2. It appears before run 1 in the grid view.
See attached screenshot

### What you think should happen instead
Dags in grid view should appear in the order they are started.
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow==2.3.2
apache-airflow-client==2.1.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.0.2
apache-airflow-providers-docker==3.0.0
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-postgres==5.0.0
apache-airflow-providers-sqlite==2.1.3
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25090 | https://github.com/apache/airflow/pull/25633 | a1beede41fb299b215f73f987a572c34f628de36 | 36eea1c8e05a6791d144e74f4497855e35baeaac | "2022-07-15T11:16:35Z" | python | "2022-08-18T06:28:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,036 | ["airflow/example_dags/example_datasets.py", "airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | Test that dataset not updated when task skipped | the AIP specifies that when a task is skipped, that we don’t mark the dataset as “updated”. we should simply add a test that verifies that this is what happens (and make changes if necessary)
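A minimal sketch of the scenario such a test needs to cover (illustrative names only, not the merged test):

```python
from datetime import datetime

from airflow import DAG
from airflow.datasets import Dataset
from airflow.exceptions import AirflowSkipException
from airflow.operators.python import PythonOperator

outlet = Dataset("test://skipped-outlet")  # illustrative URI


def _skip():
    raise AirflowSkipException("skipped, so the outlet must NOT be marked as updated")


with DAG("skip_producer", start_date=datetime(2022, 7, 1), schedule_interval=None) as dag:
    # After this task runs and ends up SKIPPED, no dataset event should be recorded
    # and no dataset-driven downstream DAG should be queued.
    PythonOperator(task_id="producer", python_callable=_skip, outlets=[outlet])
```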
@blag, i tried to make this an issue so i could assign to you but can't. anyway, can reference in PR with `closes` | https://github.com/apache/airflow/issues/25036 | https://github.com/apache/airflow/pull/25086 | 808035e00aaf59a8012c50903a09d3f50bd92ca4 | f0c9ac9da6db3a00668743adc9b55329ec567066 | "2022-07-13T19:31:16Z" | python | "2022-07-19T03:43:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 25,033 | ["airflow/models/dag.py", "airflow/www/templates/airflow/dag.html", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py", "tests/models/test_dag.py", "tests/www/views/test_views_base.py"] | next run should show deps fulfillment e.g. 0 of 3 | on dags page (i.e. the home page) we have a "next run" column. for dataset-driven dags, since we can't know for certain when it will be, we could instead show how many deps are fulfilled, e.g. `0 of 1` and perhaps make it a link to the datasets that the dag is dependened on.
here's a sample query that returns the dags which _are_ ready to run. but for this feature you'd need to get the num deps fulfilled and the total num deps.
```python
# these dag ids are triggered by datasets, and they are ready to go.
dataset_triggered_dag_info_list = {
    x.dag_id: (x.first_event_time, x.last_event_time)
    for x in session.query(
        DatasetDagRef.dag_id,
        func.max(DDRQ.created_at).label('last_event_time'),
        func.max(DDRQ.created_at).label('first_event_time'),
    )
    .join(
        DDRQ,
        and_(
            DDRQ.dataset_id == DatasetDagRef.dataset_id,
            DDRQ.target_dag_id == DatasetDagRef.dag_id,
        ),
        isouter=True,
    )
    .group_by(DatasetDagRef.dag_id)
    .having(func.count() == func.sum(case((DDRQ.target_dag_id.is_not(None), 1), else_=0)))
    .all()
}
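# A sketch of the "fulfilled of total" counts that UI column would need, reusing the
# models above (DatasetDagRef, DDRQ). Untested, for illustration only.
dataset_dep_counts = {
    row.dag_id: (row.fulfilled_deps or 0, row.total_deps)
    for row in session.query(
        DatasetDagRef.dag_id,
        func.count(DatasetDagRef.dataset_id).label('total_deps'),
        func.sum(case((DDRQ.target_dag_id.is_not(None), 1), else_=0)).label('fulfilled_deps'),
    )
    .join(
        DDRQ,
        and_(
            DDRQ.dataset_id == DatasetDagRef.dataset_id,
            DDRQ.target_dag_id == DatasetDagRef.dag_id,
        ),
        isouter=True,
    )
    .group_by(DatasetDagRef.dag_id)
    .all()
}
# e.g. render as f"{fulfilled} of {total}" in the "Next Run" column.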
``` | https://github.com/apache/airflow/issues/25033 | https://github.com/apache/airflow/pull/25141 | 47b72056c46931aef09d63d6d80fbdd3d9128b09 | 03a81b66de408631147f9353de6ffd3c1df45dbf | "2022-07-13T19:19:26Z" | python | "2022-07-21T18:28:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,953 | ["airflow/providers/oracle/example_dags/__init__.py", "airflow/providers/oracle/example_dags/example_oracle.py", "docs/apache-airflow-providers-oracle/index.rst", "docs/apache-airflow-providers-oracle/operators/index.rst"] | oracle hook _map_param() incorrect | ### Apache Airflow Provider(s)
oracle
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.3 (latest released)
### Operating System
OEL 7.6
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
[_map_param()](https://github.com/apache/airflow/blob/main/airflow/providers/oracle/hooks/oracle.py#L36) function from Oracle hook has an incorrect check of types:
```
PARAM_TYPES = {bool, float, int, str}

def _map_param(value):
    if value in PARAM_TYPES:
        # In this branch, value is a Python type; calling it produces
        # an instance of the type which is understood by the Oracle driver
        # in the out parameter mapping mechanism.
        value = value()
    return value
```
`if value in PARAM_TYPES` never gets True for all the mentioned variables types:
```
PARAM_TYPES = {bool, float, int, str}
>>> "abc" in PARAM_TYPES
False
>>> 123 in PARAM_TYPES
False
>>> True in PARAM_TYPES
False
>>> float(5.5) in PARAM_TYPES
False
```
The correct condition would be `if type(value) in PARAM_TYPES`
**But** if we only fix this condition, then in the positive case (`type(value) in PARAM_TYPES` is True) another issue occurs with `value = value()`:
plain `bool`, `float`, `int` or `str` values are not callable, so this raises
`TypeError: 'int' object is not callable`
This line is probably here for passing a Python callable as a procedure parameter in tasks, is it? If so, the condition needs to be:
`if type(value) not in PARAM_TYPES`
Here is the full fix:
```
def _map_param(value):
    if type(value) not in PARAM_TYPES:
        value = value()
    return value
```
The following cases were tested:
```
def oracle_callable(n=123):
    return n

def oracle_pass():
    return 123

task1 = OracleStoredProcedureOperator(task_id='task1', oracle_conn_id='oracle_conn', procedure='AIRFLOW_TEST',
                                      parameters={'var': oracle_callable})
task2 = OracleStoredProcedureOperator(task_id='task2', oracle_conn_id='oracle_conn', procedure='AIRFLOW_TEST',
                                      parameters={'var': oracle_callable()})
task3 = OracleStoredProcedureOperator(task_id='task3', oracle_conn_id='oracle_conn', procedure='AIRFLOW_TEST',
                                      parameters={'var': oracle_callable(456)})
task4 = OracleStoredProcedureOperator(task_id='task4', oracle_conn_id='oracle_conn', procedure='AIRFLOW_TEST',
                                      parameters={'var': oracle_pass})
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24953 | https://github.com/apache/airflow/pull/30979 | 130b6763db364426d1d794641b256d7f2ce0b93d | edebfe3f2f2c7fc2b6b345c6bc5f3a82e7d47639 | "2022-07-10T23:01:34Z" | python | "2023-05-09T18:32:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,938 | ["airflow/providers/databricks/operators/databricks.py"] | Add support for dynamic databricks connection id | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==3.0.0 # Latest
### Apache Airflow version
2.3.2 (latest released)
### Operating System
Linux
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
_No response_
### What you think should happen instead
### Motivation
In a single airflow deployment, we are looking to have the ability to support multiple databricks connections ( `databricks_conn_id`) at runtime. This can be helpful to run the same DAG against multiple testing lanes(a.k.a. different development/testing Databricks environments).
### Potential Solution
We can pass the connection id via the Airflow DAG run configuration at runtime. For this, `databricks_conn_id` is required to be a templated field.
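With that in place, usage could look roughly like this (a sketch; the `dag_run.conf` key name and job id are illustrative, and this only works once `databricks_conn_id` becomes a templated field):

```python
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

run_job = DatabricksRunNowOperator(
    task_id="run_job",
    job_id=123,  # illustrative job id
    # Resolved at runtime from the DAG run configuration, falling back to a default connection.
    databricks_conn_id="{{ dag_run.conf.get('databricks_conn_id', 'databricks_default') }}",
)
```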
### How to reproduce
Minor enhancement/new feature
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24938 | https://github.com/apache/airflow/pull/24945 | 7fc5e0b24a8938906ad23eaa1262c9fb74ee2df1 | 8dfe7bf5ff090a675353a49da21407dffe2fc15e | "2022-07-09T07:55:53Z" | python | "2022-07-11T14:47:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,936 | ["airflow/example_dags/example_dag_decorator.py", "airflow/example_dags/example_sla_dag.py", "airflow/models/dag.py", "docs/spelling_wordlist.txt"] | Type hints for taskflow @dag decorator | ### Description
I find no type hints when writing a DAG using the TaskFlow API. The `dag` and `task` decorators are simple wrappers, with no detailed arguments provided in the docstring.
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24936 | https://github.com/apache/airflow/pull/25044 | 61fc4899d71821fd051944d5d9732f7d402edf6c | be63c36bf1667c8a420d34e70e5a5efd7ca42815 | "2022-07-09T03:25:14Z" | python | "2022-07-15T01:29:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,921 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | Add options to Docker Operator | ### Description
I'm trying to add options like log-opt max-size 5 and I can't.
### Use case/motivation
I'm working in Hummingbot and I would like to offer the community a system to manage multiple bots, rebalance portfolio, etc. Our system needs a terminal to execute commands so currently I'm not able to use airflow to accomplish this task.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24921 | https://github.com/apache/airflow/pull/26653 | fd27584b3dc355eaf0c0cd7a4cd65e0e580fcf6d | 19d6f54704949d017b028e644bbcf45f5b53120b | "2022-07-08T12:01:04Z" | python | "2022-09-27T14:42:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,844 | ["airflow/www/static/js/api/useGridData.test.js", "airflow/www/static/js/api/useGridData.ts"] | grid_data api keep refreshing when backfill DAG runs | ### Apache Airflow version
2.3.2 (latest released)
### What happened

### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
186-Ubuntu
### Versions of Apache Airflow Providers
2.3.2
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24844 | https://github.com/apache/airflow/pull/25042 | 38d6c28f9cf9ee4f663d068032830911f7a8e3a3 | de6938e173773d88bd741e43c7b0aa16d8a1a167 | "2022-07-05T12:09:40Z" | python | "2022-07-20T10:30:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,820 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | Dag disappears when DAG tag is longer than 100 char limit | ### Apache Airflow version
2.2.5
### What happened
We added new DAG tags to a couple of our DAGs. In the case when the tag was longer than the 100 character limit the DAG was not showing in the UI and wasn't scheduled. It was however possible to reach it by typing in the URL to the DAG.
Usually when DAGs are broken there will be an error message in the UI, but this problem did not render any error message.
This problem occurred to one of our templated DAGs. Only one DAG broke and it was the one with a DAG tag which was too long. When we fixed the length, the DAG was scheduled and was visible in the UI again.
### What you think should happen instead
Exclude the dag if it is over the 100 character limit or show an error message in the UI.
### How to reproduce
Add a DAG tag which is longer than 100 characters.
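For example (a minimal sketch; the tag string just needs to exceed 100 characters):

```python
from datetime import datetime

from airflow import DAG

with DAG(
    dag_id="long_tag_dag",
    start_date=datetime(2022, 7, 1),
    schedule_interval=None,
    tags=["x" * 101],  # exceeds the 100-character limit of the dag_tag column
) as dag:
    pass
```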
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
Running Airflow in Kubernetes.
Syncing DAGs from S3 with https://tech.scribd.com/blog/2020/breaking-up-the-dag-repo.html
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24820 | https://github.com/apache/airflow/pull/25196 | a5cbcb56774d09b67c68f87187f2f48d6e70e5f0 | 4b28635b2085a07047c398be6cc1ac0252a691f7 | "2022-07-04T07:59:19Z" | python | "2022-07-25T13:46:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,748 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/kubernetes/kube_client.py", "tests/kubernetes/test_client.py"] | Configuring retry policy of the the kubernetes CoreV1Api ApiClient | ### Description
Can we add an option to configure the retry policy of the Kubernetes CoreV1Api client, or give it a more resilient default configuration?
Today it appears to retry operations 3 times with 0 backoff in between each try, so temporary network glitches result in fatal errors.
Following the flow below:
1. `airflow.kubernetes.kube_client.get_kube_client()`
Calls `load_kube_config()` without any configuration set, this assigns a default configuration with `retries=None` to `CoreV1Api.set_default()`
1b. Creates `CoreV1Api()` with `api_client=None`
1c. `ApiClient()` default constructor creates a default configuration object via `Configuration.get_default_copy(), this is the default injected above`
2. On request, through some complicated flow inside `ApiClient` and urllib3, this `configuration.retries` eventually finds its way into urllib `HTTPConnectionPool`, where if unset, it uses `urllib3.util.Retry.DEFAULT`, this has a policy of 3x retries with 0 backoff time in between.
------
Configuring the ApiClient would mean changing the `get_kube_client()` to something roughly resembling:
```
client_config = Configuration()
client_config.retries = Retry(total=3, backoff=LOAD_FROM_CONFIG)
config.load_kube_config(...., client_configuration=client_config)
apiclient = ApiClient(client_config)
return CoreV1Api(apiclient)
```
I don't know myself how fine granularity is best to expose to be configurable from airflow. The retry object has a lot of different options, so do the rest of the kubernetes client Configuration object. Maybe it should be injected from a plugin rather than config-file? Maybe urllib or kubernets library have other ways to set default config?
### Use case/motivation
Our Kubernetes API server had some unknown hiccup for 10 seconds; this caused the Airflow Kubernetes executor to crash, restarting Airflow, and then it started killing pods that were running fine, showing the following log: "Reset the following 1 orphaned TaskInstances"
If the retries had had some backoff, it would likely have survived this hiccup.
See attachment for the full stack trace, it's too long to include inline. Here is the most interesting parts:
```
2022-06-29 21:25:49 Class={kubernetes_executor.py:111} Level=ERROR Unknown error in KubernetesJobWatcher. Failing
...
2022-06-29 21:25:49 Class={connectionpool.py:810} Level=WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbe35de0c70>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/namespaces/default/pods/REDACTED
2022-06-29 21:25:49 urllib3.exceptions.ProtocolError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read))
2022-06-29 21:25:49 Class={connectionpool.py:810} Level=WARNING Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbe315ec040>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/namespaces/default/pods/REDACTED
2022-06-29 21:25:49 Class={connectionpool.py:810} Level=WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbe315ec670>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/namespaces/default/pods/REDACTED
...
2022-06-29 21:25:50 Class={kubernetes_executor.py:813} Level=INFO Shutting down Kubernetes executor
...
2022-06-29 21:26:08 Class={scheduler_job.py:696} Level=INFO Starting the scheduler
...
2022-06-29 21:27:29 Class={scheduler_job.py:1285} Level=INFO Message=Reset the following 1 orphaned TaskInstances:
```
[airflowkubernetsretrycrash.log](https://github.com/apache/airflow/files/9017815/airflowkubernetsretrycrash.log)
From airflow version 2.3.2
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24748 | https://github.com/apache/airflow/pull/29809 | 440bf46ff0b417c80461cf84a68bd99d718e19a9 | dcffbb4aff090e6c7b6dc96a4a68b188424ae174 | "2022-06-30T08:27:01Z" | python | "2023-04-14T13:37:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,725 | ["airflow/www/templates/airflow/dag.html"] | Trigger DAG from templated view tab producing bad request | ### Body
Reproduced on main branch.
The bug:
When clicking Trigger DAG from the templated view tab, it results in a BAD REQUEST page; however, the DAG run is created (it also produces the UI alert "it should start any moment now").
By comparison, triggering the DAG from the log tab works as expected, so the issue seems to be relevant only to this specific tab.

### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/24725 | https://github.com/apache/airflow/pull/25729 | f24e706ff7a84fd36ea39dc3399346c357d40bd9 | 69663b245a9a67b6f05324ce7b141a1bd9b05e0a | "2022-06-29T07:06:00Z" | python | "2022-08-17T13:21:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,692 | ["airflow/providers/apache/hive/hooks/hive.py", "tests/providers/apache/hive/hooks/test_hive.py"] | Error for Hive Server2 Connection Document | ### What do you see as an issue?
In this document, https://airflow.apache.org/docs/apache-airflow-providers-apache-hive/stable/connections/hiveserver2.html, the Extra field is described as using "auth_mechanism", but the source code uses "authMechanism".
### Solving the problem
Use the same key in both places.
### Anything else
None
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24692 | https://github.com/apache/airflow/pull/24713 | 13908c2c914cf08f9d962a4d3b6ae54fbdf1d223 | cef97fccd511c8e5485df24f27b82fa3e46929d7 | "2022-06-28T01:16:20Z" | python | "2022-06-29T14:12:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,678 | ["airflow/templates.py"] | Macro prev_execution_date is always empty | ### Apache Airflow version
2.3.2 (latest released)
### What happened
The variable `prev_execution_date` is empty on the first run, meaning any usage will automatically trigger a None error.
### What you think should happen instead
A default date should be provided instead, either the DAG's `start_date` or a default `datetime.min` as during the first run, it will always trigger an error effectively preventing the DAG from running and hence, always returning an error.
### How to reproduce
Pass the variables/macros to any Task:
```
{
    "execution_datetime": '{{ ts_nodash }}',
    "prev_execution_datetime": '{{ prev_start_date_success | ts_nodash }}'  # .strftime("%Y%m%dT%H%M%S")
}
```
Whilst the logical execution date (`execution_datetime`) works, the previous successful logical execution date (`prev_execution_datetime`) automatically blows up when applying the `ts_nodash` filter. This effectively makes it impossible to ever use this macro, as it will always fail.
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24678 | https://github.com/apache/airflow/pull/25593 | 1594d7706378303409590c57ab1b17910e5d09e8 | 741c20770230c83a95f74fe7ad7cc9f95329f2cc | "2022-06-27T12:59:53Z" | python | "2022-08-09T10:34:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,653 | ["airflow/operators/trigger_dagrun.py"] | Mapped TriggerDagRunOperator causes SerializationError due to operator_extra_links 'property' object is not iterable | ### Apache Airflow version
2.3.2 (latest released)
### What happened
Hi, I have a kind of issue with launching several subDags via mapping TriggerDagRunOperator (mapping over `conf` parameter). Here is the demo example of my typical DAG:
```python
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow import XComArg
from datetime import datetime

with DAG(
    'triggerer',
    schedule_interval=None,
    catchup=False,
    start_date=datetime(2019, 12, 2)
) as dag:
    t1 = PythonOperator(
        task_id='first',
        python_callable=lambda: list(map(lambda i: {"x": i}, list(range(10)))),
    )
    t2 = TriggerDagRunOperator.partial(
        task_id='second',
        trigger_dag_id='mydag'
    ).expand(conf=XComArg(t1))
    t1 >> t2
```
But when Airflow tries to import such DAG it throws the following SerializationError (which I observed both in UI and in $AIRFLOW_HOME/logs/scheduler/latest/<my_dag_name>.py.log):
```
Broken DAG: [/home/aliona/airflow/dags/triggerer_dag.py] Traceback (most recent call last):
File "/home/aliona/airflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 638, in _serialize_node
serialize_op['_operator_extra_links'] = cls._serialize_operator_extra_links(
File "/home/aliona/airflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 933, in _serialize_operator_extra_links
for operator_extra_link in operator_extra_links:
TypeError: 'property' object is not iterable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/aliona/airflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 1106, in to_dict
json_dict = {"__version": cls.SERIALIZER_VERSION, "dag": cls.serialize_dag(var)}
File "/home/aliona/airflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 1014, in serialize_dag
raise SerializationError(f'Failed to serialize DAG {dag.dag_id!r}: {e}')
airflow.exceptions.SerializationError: Failed to serialize DAG 'triggerer': 'property' object is not iterable
```
How it appears in the UI:

### What you think should happen instead
I think that TriggerDagRunOperator mapped over `conf` parameter should serialize and work well by default.
During the debugging process, while trying to make everything work, I found out that a simple non-mapped TriggerDagRunOperator has the value `['Triggered DAG']` in its `operator_extra_links` field, so it is a list. But for a mapped TriggerDagRunOperator it is a 'property'. I don't have any idea why Airflow cannot get the value of this property during serialization, but I tried to reinitialize this field with the `['Triggered DAG']` value and that fixed the issue for me.
For now, for every case of using mapped TriggerDagRunOperator I also use such code at the end of my dag file:
```python
# here 'second' is the name of corresponding mapped TriggerDagRunOperator task (see demo code above)
t2_patch = dag.task_dict['second']
t2_patch.operator_extra_links=['Triggered DAG']
dag.task_dict.update({'second': t2_patch})
```
So, for every mapped TriggerDagRunOperator task I manually change the value of the operator_extra_links property to `['Triggered DAG']`, and as a result there is no SerializationError anymore. I have a lot of such cases, and all of them work well with this fix: all subDAGs are launched and the mapped configuration is passed correctly. I can also choose to wait for the end of their execution or not; both options work correctly.
### How to reproduce
1. Create dag with mapped TriggerDagRunOperator tasks (main parameters such as task_id, trigger_dag_id and others are in `partial section`, in `expand` section use conf parameter with non-empty iterable value), as, for example:
```python
t2 = TriggerDagRunOperator.partial(
    task_id='second',
    trigger_dag_id='mydag'
).expand(conf=[{'x': 1}])
```
2. Try to serialize dag, and error will appear.
The full example of failing dag file:
```python
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow import XComArg
from datetime import datetime

with DAG(
    'triggerer',
    schedule_interval=None,
    catchup=False,
    start_date=datetime(2019, 12, 2)
) as dag:
    t1 = PythonOperator(
        task_id='first',
        python_callable=lambda: list(map(lambda i: {"x": i}, list(range(10)))),
    )
    t2 = TriggerDagRunOperator.partial(
        task_id='second',
        trigger_dag_id='mydag'
    ).expand(conf=[{'a': 1}])
    t1 >> t2

# uncomment these lines to fix an error
# t2_patch = dag.task_dict['second']
# t2_patch.operator_extra_links=['Triggered DAG']
# dag.task_dict.update({'second': t2_patch})
```
As the sub-DAG ('mydag') I use this DAG:
```python
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from datetime import datetime
with DAG(
'mydag',
schedule_interval=None,
catchup=False,
start_date=datetime(2019, 12, 2)
) as dag:
t1 = PythonOperator(
task_id='first',
python_callable=lambda : print("first"),
)
t2 = PythonOperator(
task_id='second',
python_callable=lambda : print("second"),
)
t1 >> t2
```
### Operating System
Ubuntu 22.04 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-sqlite==2.1.3
### Deployment
Virtualenv installation
### Deployment details
Python 3.10.4
pip 22.0.2
### Anything else
Currently, for demonstration purposes, I am using a fully local Airflow installation: single node, SequentialExecutor and a SQLite database backend. But the same issue also appeared in a multi-node installation with either CeleryExecutor or LocalExecutor and a PostgreSQL database backend.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24653 | https://github.com/apache/airflow/pull/24676 | 48ceda22bdbee50b2d6ca24767164ce485f3c319 | 8dcafdfcdddc77fdfd2401757dcbc15bfec76d6b | "2022-06-25T14:13:29Z" | python | "2022-06-28T02:59:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,574 | ["airflow/providers/airbyte/hooks/airbyte.py", "airflow/providers/airbyte/operators/airbyte.py", "tests/providers/airbyte/hooks/test_airbyte.py"] | `AirbyteHook` add cancel job option | ### Apache Airflow Provider(s)
airbyte
### Versions of Apache Airflow Providers
I want to cancel the job if it runs for more than a specific time. The task times out; however, the Airbyte job is not cancelled. It seems the on_kill feature has not been implemented.
Workaround:
Create a custom operator, implement a cancel call in the hook, and call it from the on_kill function:
```python
def on_kill(self):
    if self.job_id:
        self.log.error('on_kill: stopping airbyte Job %s', self.job_id)
        self.hook.cancel_job(self.job_id)
```
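A rough sketch of that workaround is shown below. It assumes `execute()` has been overridden (or patched) to keep the submitted job id on `self`, and that the Airbyte `jobs/cancel` endpoint is the right call; both are assumptions here, since the stock hook does not expose a cancel method yet:
```python
from airflow.providers.airbyte.hooks.airbyte import AirbyteHook
from airflow.providers.airbyte.operators.airbyte import AirbyteTriggerSyncOperator


class CancellableAirbyteTriggerSyncOperator(AirbyteTriggerSyncOperator):
    """Illustrative subclass: cancel the Airbyte job when the Airflow task is killed."""

    def on_kill(self):
        job_id = getattr(self, 'job_id', None)  # assumes execute() stored the submitted job id
        if not job_id:
            return
        self.log.error('on_kill: stopping Airbyte job %s', job_id)
        hook = AirbyteHook(airbyte_conn_id=self.airbyte_conn_id, api_version=self.api_version)
        # AirbyteHook is an HttpHook, so the cancel request can be issued directly;
        # the endpoint path follows the Airbyte jobs API and is an assumption here.
        hook.run(
            endpoint=f"api/{self.api_version}/jobs/cancel",
            json={"id": job_id},
            headers={"accept": "application/json"},
        )
```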
### Apache Airflow version
2.0.2
### Operating System
Linux
### Deployment
MWAA
### Deployment details
Airflow 2.0.2
### What happened
The Airbyte job was not cancelled upon timeout.
### What you think should happen instead
It should cancel the Airbyte job when the task times out.
### How to reproduce
Make sure the job runs longer than the timeout, for example:
```python
sync_source_destination = AirbyteTriggerSyncOperator(
    task_id=f'airbyte_{key}',
    airbyte_conn_id='airbyte_con',
    connection_id=key,
    asynchronous=False,
    execution_timeout=timedelta(minutes=2),
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24574 | https://github.com/apache/airflow/pull/24593 | 45b11d4ed1412c00ebf32a03ab5ea3a06274f208 | c118b2836f7211a0c3762cff8634b7b9a0d1cf0b | "2022-06-21T03:16:53Z" | python | "2022-06-29T06:43:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,572 | ["docs/apache-airflow-providers-snowflake/connections/snowflake.rst"] | Snowflake Provider connection documentation is misleading | ### What do you see as an issue?
Relevant page: https://airflow.apache.org/docs/apache-airflow-providers-snowflake/stable/connections/snowflake.html
## Behavior in the Airflow package
The `SnowflakeHook` object in Airflow behaves oddly compared to some other database hooks like Postgres (so extra clarity in the documentation is beneficial).
Most notably, the `SnowflakeHook` does _not_ make use of the either the `host` or `port` of the `Connection` object it consumes. It is completely pointless to specify these two fields.
When constructing the URL in a runtime context, `snowflake.sqlalchemy.URL` is used for parsing. `URL()` allows for either `account` or `host` to be specified as kwargs. Either one of these 2 kwargs will correspond with what we'd conventionally call the host in a typical URL's anatomy. However, because `SnowflakeHook` never parses `host`, any `host` defined in the Connection object would never get this far into the parsing.
## Issue with the documentation
Right now the documentation does not make it clear that it is completely pointless to specify the `host`. The documentation correctly omits the port, but says that the host is optional. It does not warn the user that this field is never consumed at all by the `SnowflakeHook` ([source here](https://github.com/apache/airflow/blob/main/airflow/providers/snowflake/hooks/snowflake.py)).
This can lead to some confusion especially because the Snowflake URI consumed by `SQLAlchemy` (which many people using Snowflake will be familiar with) uses either the "account" or "host" as its host. So a user coming from SQLAlchemy may think it is fine to post the account as the "host" and skip filling in the "account" inside the extras (after all, it's "extra"), whereas that doesn't work.
I would argue that if it is correct to omit the `port` in the documentation (which it is), then `host` should also be excluded.
Furthermore, the documentation reinforces this confusion with the last few lines, where an environment variable example connection is defined that uses a host.
Finally, the documentation says "When specifying the connection in environment variable you should specify it using URI syntax", which is no longer true as of 2.3.0.
### Solving the problem
I have 3 proposals for how the documentation should be updated to better reflect how the `SnowflakeHook` actually works.
1. The `Host` option should not be listed as part of the "Configuring the Connection" section.
2. The example URI should remove the host. The new example URI would look like this: `snowflake://user:password@/db-schema?account=account&database=snow-db®ion=us-east&warehouse=snow-warehouse`. This URI with a blank host works fine; you can test this yourself:
```python
from airflow.models.connection import Connection
c = Connection(conn_id="foo", uri="snowflake://user:password@/db-schema?account=account&database=snow-db®ion=us-east&warehouse=snow-warehouse")
print(c.host)
print(c.extra_dejson)
```
3. An example should be provided of a valid Snowflake construction using the JSON. This example would not only work on its own merits of defining an environment variable connection valid for 2.3.0, but it also would highlight some of the idiosyncrasies of how Airflow defines connections to Snowflake. This would also be valuable as a reference for the AWS `SecretsManagerBackend` for when `full_url_mode` is set to `False`.
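For illustration, such an example could look roughly like the sketch below (written here via Python and `json.dumps` only to keep it copy-pasteable; the field names follow the Connection JSON format added in 2.3.0 and all values are placeholders):
```python
import json
import os

# Same connection as the example URI, expressed as JSON (all values are placeholders).
# Note there is no "host" key at all; the account lives in "extra".
os.environ["AIRFLOW_CONN_SNOWFLAKE_DEFAULT"] = json.dumps(
    {
        "conn_type": "snowflake",
        "login": "user",
        "password": "password",
        "schema": "db-schema",
        "extra": {
            "account": "account",
            "database": "snow-db",
            "region": "us-east",
            "warehouse": "snow-warehouse",
        },
    }
)
```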
### Anything else
I wasn't sure whether to label this issue as a provider issue or documentation issue; I saw templates for either but not both.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24572 | https://github.com/apache/airflow/pull/24573 | 02d8f96bfbc43e780db0220dd7647af0c0f46093 | 2fb93f88b120777330b6ed13b24fa07df279c41e | "2022-06-21T01:41:15Z" | python | "2022-06-27T21:58:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,526 | ["docs/apache-airflow/installation/upgrading.rst", "docs/spelling_wordlist.txt"] | upgrading from 2.2.3 or 2.2.5 to 2.3.2 fails on migration-job | ### Apache Airflow version
2.3.2 (latest released)
### What happened
Upgrade Airflow 2.2.3 or 2.2.5 -> 2.3.2 fails on migration-job.
**first time upgrade execution:**
```
Referencing column 'task_id' and referenced column 'task_id' in foreign key constraint 'task_map_task_instance_fkey' are incompatible.")
[SQL:
CREATE TABLE task_map (
dag_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
task_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
run_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
map_index INTEGER NOT NULL,
length INTEGER NOT NULL,
`keys` JSON,
PRIMARY KEY (dag_id, task_id, run_id, map_index),
CONSTRAINT task_map_length_not_negative CHECK (length >= 0),
CONSTRAINT task_map_task_instance_fkey FOREIGN KEY(dag_id, task_id, run_id, map_index) REFERENCES task_instance (dag_id, task_id, run_id, map_index) ON DELETE CASCADE
)
]
```
**after the first failed execution (presumably caused by the first failed execution):**
```
Can't DROP 'task_reschedule_ti_fkey'; check that column/key exists")
[SQL: ALTER TABLE task_reschedule DROP FOREIGN KEY task_reschedule_ti_fkey]
```
### What you think should happen instead
The migration-job shouldn't fail ;)
### How to reproduce
Every time in my environment: I just need to restore the database from the last working DB snapshot (Airflow version 2.2.3)
and then deploy Airflow 2.3.2.
I can update to 2.2.5 in between, but I ran into the same issue when updating to 2.3.2.
### Operating System
Debian GNU/Linux 10 (buster) - apache/airflow:2.3.2-python3.8 (hub.docker.com)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.4.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-cncf-kubernetes==2.2.0
apache-airflow-providers-docker==2.3.0
apache-airflow-providers-elasticsearch==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.2.0
apache-airflow-providers-grpc==2.0.1
apache-airflow-providers-hashicorp==2.1.1
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-microsoft-azure==3.4.0
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-odbc==2.0.1
apache-airflow-providers-postgres==2.4.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sendgrid==2.0.1
apache-airflow-providers-sftp==2.3.0
apache-airflow-providers-slack==4.1.0
apache-airflow-providers-sqlite==2.0.1
apache-airflow-providers-ssh==2.3.0
apache-airflow-providers-tableau==2.1.4
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
- K8s Rev: v1.21.12-eks-a64ea69
- helm chart version: 1.6.0
- Database: AWS RDS MySQL 8.0.28
### Anything else
Full error log of the **first** execution:
```
/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py:529: DeprecationWarning: The auth_backend option in [api] has been renamed to auth_backends - the old setting has been used, but please update your config.
option = self._get_option_from_config_file(deprecated_key, deprecated_section, key, kwargs, section)
/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py:356: FutureWarning: The auth_backends setting in [api] has had airflow.api.auth.backend.session added in the running config, which is needed by the UI. Please update your config before Apache Airflow 3.0.
warnings.warn(
DB: mysql+mysqldb://airflow:***@test-airflow2-db-blue.fsgfsdcfds76.eu-central-1.rds.amazonaws.com:3306/airflow
Performing upgrade with database mysql+mysqldb://airflow:***@test-airflow2-db-blue.fsgfsdcfds76.eu-central-1.rds.amazonaws.com:3306/airflow
[2022-06-17 12:19:59,724] {db.py:920} WARNING - Found 33 duplicates in table task_fail. Will attempt to move them.
[2022-06-17 12:36:18,813] {db.py:1448} INFO - Creating tables
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade be2bfac3da23 -> c381b21cb7e4, Create a ``session`` table to store web session data
INFO [alembic.runtime.migration] Running upgrade c381b21cb7e4 -> 587bdf053233, Add index for ``dag_id`` column in ``job`` table.
INFO [alembic.runtime.migration] Running upgrade 587bdf053233 -> 5e3ec427fdd3, Increase length of email and username in ``ab_user`` and ``ab_register_user`` table to ``256`` characters
INFO [alembic.runtime.migration] Running upgrade 5e3ec427fdd3 -> 786e3737b18f, Add ``timetable_description`` column to DagModel for UI.
INFO [alembic.runtime.migration] Running upgrade 786e3737b18f -> f9da662e7089, Add ``LogTemplate`` table to track changes to config values ``log_filename_template``
INFO [alembic.runtime.migration] Running upgrade f9da662e7089 -> e655c0453f75, Add ``map_index`` column to TaskInstance to identify task-mapping,
and a ``task_map`` table to track mapping values from XCom.
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (3780, "Referencing column 'task_id' and referenced column 'task_id' in foreign key constraint 'task_map_task_instance_fkey' are incompatible.")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/db_command.py", line 82, in upgradedb
db.upgradedb(to_revision=to_revision, from_revision=from_revision, show_sql_only=args.show_sql_only)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/db.py", line 1449, in upgradedb
command.upgrade(config, revision=to_revision or 'heads')
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/command.py", line 322, in upgrade
script.run_env()
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/script/base.py", line 569, in run_env
util.load_python_file(self.dir, "env.py")
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 94, in load_python_file
module = load_module_py(module_id, path)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 110, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/env.py", line 107, in <module>
run_migrations_online()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/env.py", line 101, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/runtime/environment.py", line 853, in run_migrations
self.get_context().run_migrations(**kw)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/runtime/migration.py", line 623, in run_migrations
step.migration_fn(**kw)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/versions/0100_2_3_0_add_taskmap_and_map_id_on_taskinstance.py", line 75, in upgrade
op.create_table(
File "<string>", line 8, in create_table
File "<string>", line 3, in create_table
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/ops.py", line 1254, in create_table
return operations.invoke(op)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/base.py", line 394, in invoke
return fn(self, operation)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/toimpl.py", line 114, in create_table
operations.impl.create_table(table)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 354, in create_table
self._exec(schema.CreateTable(table))
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 195, in _exec
return conn.execute(construct, multiparams)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 77, in _execute_on_connection
return connection._execute_ddl(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1290, in _execute_ddl
ret = self._execute_context(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
self._handle_dbapi_exception(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
util.raise_(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (3780, "Referencing column 'task_id' and referenced column 'task_id' in foreign key constraint 'task_map_task_instance_fkey' are incompatible.")
[SQL:
CREATE TABLE task_map (
dag_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
task_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
run_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
map_index INTEGER NOT NULL,
length INTEGER NOT NULL,
`keys` JSON,
PRIMARY KEY (dag_id, task_id, run_id, map_index),
CONSTRAINT task_map_length_not_negative CHECK (length >= 0),
CONSTRAINT task_map_task_instance_fkey FOREIGN KEY(dag_id, task_id, run_id, map_index) REFERENCES task_instance (dag_id, task_id, run_id, map_index) ON DELETE CASCADE
)
]
(Background on this error at: http://sqlalche.me/e/14/e3q8)
```
Full error log **after** the first execution (presumably caused by the first execution):
```
/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py:529: DeprecationWarning: The auth_backend option in [api] has been renamed to auth_backends - the old setting has been used, but please update your config.
option = self._get_option_from_config_file(deprecated_key, deprecated_section, key, kwargs, section)
/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py:356: FutureWarning: The auth_backends setting in [api] has had airflow.api.auth.backend.session added in the running config, which is needed by the UI. Please update your config before Apache Airflow 3.0.
warnings.warn(
DB: mysql+mysqldb://airflow:***@test-airflow2-db-blue.cndbtlpttl69.eu-central-1.rds.amazonaws.com:3306/airflow
Performing upgrade with database mysql+mysqldb://airflow:***@test-airflow2-db-blue.cndbtlpttl69.eu-central-1.rds.amazonaws.com:3306/airflow
[2022-06-17 12:41:53,882] {db.py:1448} INFO - Creating tables
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade f9da662e7089 -> e655c0453f75, Add ``map_index`` column to TaskInstance to identify task-mapping,
and a ``task_map`` table to track mapping values from XCom.
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1091, "Can't DROP 'task_reschedule_ti_fkey'; check that column/key exists")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/db_command.py", line 82, in upgradedb
db.upgradedb(to_revision=to_revision, from_revision=from_revision, show_sql_only=args.show_sql_only)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/db.py", line 1449, in upgradedb
command.upgrade(config, revision=to_revision or 'heads')
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/command.py", line 322, in upgrade
script.run_env()
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/script/base.py", line 569, in run_env
util.load_python_file(self.dir, "env.py")
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 94, in load_python_file
module = load_module_py(module_id, path)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 110, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/env.py", line 107, in <module>
run_migrations_online()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/env.py", line 101, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/runtime/environment.py", line 853, in run_migrations
self.get_context().run_migrations(**kw)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/runtime/migration.py", line 623, in run_migrations
step.migration_fn(**kw)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/versions/0100_2_3_0_add_taskmap_and_map_id_on_taskinstance.py", line 49, in upgrade
batch_op.drop_index("idx_task_reschedule_dag_task_run")
File "/usr/local/lib/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/base.py", line 376, in batch_alter_table
impl.flush()
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/batch.py", line 111, in flush
fn(*arg, **kw)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/mysql.py", line 155, in drop_constraint
super(MySQLImpl, self).drop_constraint(const)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 338, in drop_constraint
self._exec(schema.DropConstraint(const))
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 195, in _exec
return conn.execute(construct, multiparams)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 77, in _execute_on_connection
return connection._execute_ddl(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1290, in _execute_ddl
ret = self._execute_context(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
self._handle_dbapi_exception(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
util.raise_(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1091, "Can't DROP 'task_reschedule_ti_fkey'; check that column/key exists")
[SQL: ALTER TABLE task_reschedule DROP FOREIGN KEY task_reschedule_ti_fkey]
(Background on this error at: http://sqlalche.me/e/14/e3q8)
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24526 | https://github.com/apache/airflow/pull/25938 | 994f18872af8d2977d78e6d1a27314efbeedb886 | e2592628cb0a6a37efbacc64064dbeb239e83a50 | "2022-06-17T13:59:27Z" | python | "2022-08-25T14:15:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,487 | ["airflow/models/expandinput.py", "tests/models/test_mappedoperator.py"] | Dynamic mapping over KubernetesPodOperator results produces triplicate child tasks | ### Apache Airflow version
2.3.2 (latest released)
### What happened
Attempting to use [dynamic task mapping](https://airflow.apache.org/docs/apache-airflow/2.3.0/concepts/dynamic-task-mapping.html#mapping-over-result-of-classic-operators) on the results of a `KubernetesPodOperator` (or `GKEStartPodOperator`) produces 3x as many downstream task instances as it should. Two-thirds of the downstream tasks fail more or less instantly.
### What you think should happen instead
The problem is that the number of downstream tasks is calculated by counting XCOMs associated with the upstream task, assuming that each `task_id` has a single XCOM:
https://github.com/apache/airflow/blob/fe5e689adfe3b2f9bcc37d3975ae1aea9b55e28a/airflow/models/mappedoperator.py#L606-L615
However the `KubernetesPodOperator` pushes two XCOMs in its `.execute()` method:
https://github.com/apache/airflow/blob/fe5e689adfe3b2f9bcc37d3975ae1aea9b55e28a/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py#L425-L426
So the number of downstream tasks ends up being 3x what it should.
### How to reproduce
Reproducing the behavior requires access to a Kubernetes cluster, but in pseudo-code, a DAG like this should demonstrate the behavior:
```
with DAG(...) as dag:
# produces an output list with N elements
first_pod = GKEStartPodOperator(..., do_xcom_push=True)
# produces 1 output per input, so N task instances are created each with a single output
second_pod = GKEStartPodOperator.partial(..., do_xcom_push=True).expand(id=XComArg(first_pod))
# should have N task instances created, but actually gets 3N task instances created
third_pod = GKEStartPodOperator.partial(..., do_xcom_push=True).expand(id=XComArg(second_pod))
```
### Operating System
macOS 12.4
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==4.1.0
apache-airflow-providers-google==8.0.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
When I edit `mappedoperator.py` in my local deployment to filter on the XCom key, things behave as expected:
```
# Collect lengths from mapped upstreams.
xcom_query = (
session.query(XCom.task_id, func.count(XCom.map_index))
.group_by(XCom.task_id)
.filter(
XCom.dag_id == self.dag_id,
XCom.run_id == run_id,
XCom.key == 'return_value', <------- added this line
XCom.task_id.in_(mapped_dep_keys),
XCom.map_index >= 0,
)
)
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24487 | https://github.com/apache/airflow/pull/24530 | df388a3d5364b748993e61b522d0b68ff8b8124a | a69095fea1722e153a95ef9da93b002b82a02426 | "2022-06-15T23:31:31Z" | python | "2022-07-27T08:36:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,484 | ["airflow/migrations/versions/0111_2_3_3_add_indexes_for_cascade_deletes.py", "airflow/models/taskfail.py", "airflow/models/taskreschedule.py", "airflow/models/xcom.py", "docs/apache-airflow/migrations-ref.rst"] | `airflow db clean task_instance` takes a long time | ### Apache Airflow version
2.3.1
### What happened
When I ran the `airflow db clean task_instance` command, it could take up to 9 hours to complete. The database has around 3,215,220 rows in the `task_instance` table and 51,602 rows in the `dag_run` table. The overall size of the database is around 1 TB.
I believe the issue is because of the cascade constraints on others tables as well as the lack of indexes on task_instance foreign keys.
Running a delete on a small number of rows shows that most of the time is spent in the xcom and task_fail tables:
```
explain (analyze,buffers,timing) delete from task_instance t1 where t1.run_id = 'manual__2022-05-11T01:09:05.856703+00:00'; rollback;
Trigger for constraint task_reschedule_ti_fkey: time=3.208 calls=23
Trigger for constraint task_map_task_instance_fkey: time=1.848 calls=23
Trigger for constraint xcom_task_instance_fkey: time=4457.779 calls=23
Trigger for constraint rtif_ti_fkey: time=3.135 calls=23
Trigger for constraint task_fail_ti_fkey: time=1164.183 calls=23
```
I temporarily fixed it by adding these indexes.
```
create index idx_task_reschedule_dr_fkey on task_reschedule (dag_id, run_id);
create index idx_xcom_ti_fkey on xcom (dag_id, task_id, run_id, map_index);
create index idx_task_fail_ti_fkey on task_fail (dag_id, task_id, run_id, map_index);
```
### What you think should happen instead
It should not take 9 hours to complete a clean up process. Before upgrading to 2.3.x, it was taking no more than 5 minutes.
### How to reproduce
_No response_
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24484 | https://github.com/apache/airflow/pull/24488 | 127f8f4de02422ade8f2c84f84d3262d6efde185 | 677c42227c08f705142f298ab88915f133cd94e5 | "2022-06-15T21:21:18Z" | python | "2022-06-16T18:41:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,460 | ["airflow/providers/google/cloud/hooks/bigquery.py", "airflow/providers/google/cloud/operators/bigquery.py", "airflow/providers/google/cloud/triggers/bigquery.py", "docs/apache-airflow-providers-google/operators/cloud/bigquery.rst", "tests/providers/google/cloud/hooks/test_bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | let BigQueryGetData operator take a query string and as_dict flag | ### Description
Today the BigQueryGetData airflow.providers.google.cloud.operators.bigquery.BigQueryGetDataOperator only allows you to point to a specific dataset and table and how many rows you want.
It already sets up a BigQueryHook so it very easy to implement custom query from a string as well.
It would also be very efficient to have a as_dict flag to return the result as a list of dicts.
I am not an expert in python but here is my atempt at a modification of the current code (from 8.0.0rc2)
``` python
class BigQueryGetDataOperatorX(BaseOperator):
"""
Fetches the data from a BigQuery table (alternatively fetch data for selected columns)
and returns data in a python list. The number of elements in the returned list will
be equal to the number of rows fetched. Each element in the list will again be a list
where element would represent the columns values for that row.
**Example Result**: ``[['Tony', '10'], ['Mike', '20'], ['Steve', '15']]``
.. seealso::
For more information on how to use this operator, take a look at the guide:
:ref:`howto/operator:BigQueryGetDataOperator`
.. note::
If you pass fields to ``selected_fields`` which are in different order than the
order of columns already in
BQ table, the data will still be in the order of BQ table.
For example if the BQ table has 3 columns as
``[A,B,C]`` and you pass 'B,A' in the ``selected_fields``
the data would still be of the form ``'A,B'``.
**Example**: ::
get_data = BigQueryGetDataOperator(
task_id='get_data_from_bq',
dataset_id='test_dataset',
table_id='Transaction_partitions',
max_results=100,
selected_fields='DATE',
gcp_conn_id='airflow-conn-id'
)
:param dataset_id: The dataset ID of the requested table. (templated)
:param table_id: The table ID of the requested table. (templated)
:param max_results: The maximum number of records (rows) to be fetched
from the table. (templated)
:param selected_fields: List of fields to return (comma-separated). If
unspecified, all fields are returned.
:param gcp_conn_id: (Optional) The connection ID used to connect to Google Cloud.
:param delegate_to: The account to impersonate using domain-wide delegation of authority,
if any. For this to work, the service account making the request must have
domain-wide delegation enabled.
:param location: The location used for the operation.
:param impersonation_chain: Optional service account to impersonate using short-term
credentials, or chained list of accounts required to get the access_token
of the last account in the list, which will be impersonated in the request.
If set as a string, the account must grant the originating account
the Service Account Token Creator IAM role.
If set as a sequence, the identities from the list must grant
Service Account Token Creator IAM role to the directly preceding identity, with first
account from the list granting this role to the originating account (templated).
:param query: (Optional) A sql query to execute instead
:param as_dict: if True returns the result as a list of dictionaries. default to False
"""
template_fields: Sequence[str] = (
'dataset_id',
'table_id',
'max_results',
'selected_fields',
'impersonation_chain',
)
ui_color = BigQueryUIColors.QUERY.value
def __init__(
self,
*,
dataset_id: Optional[str] = None,
table_id: Optional[str] = None,
max_results: Optional[int] = 100,
selected_fields: Optional[str] = None,
gcp_conn_id: str = 'google_cloud_default',
delegate_to: Optional[str] = None,
location: Optional[str] = None,
impersonation_chain: Optional[Union[str, Sequence[str]]] = None,
query: Optional[str] = None,
as_dict: bool = False,
**kwargs,
) -> None:
super().__init__(**kwargs)
self.dataset_id = dataset_id
self.table_id = table_id
self.max_results = int(max_results)
self.selected_fields = selected_fields
self.gcp_conn_id = gcp_conn_id
self.delegate_to = delegate_to
self.location = location
self.impersonation_chain = impersonation_chain
self.query = query
self.as_dict = as_dict
if not query and not table_id:
self.log.error('Table_id or query not set. Please provide either a dataset_id + table_id or a query string')
def execute(self, context: 'Context') -> list:
self.log.info(
'Fetching Data from %s.%s max results: %s', self.dataset_id, self.table_id, self.max_results
)
hook = BigQueryHook(
gcp_conn_id=self.gcp_conn_id,
delegate_to=self.delegate_to,
impersonation_chain=self.impersonation_chain,
location=self.location,
)
if not self.query:
if not self.selected_fields:
schema: Dict[str, list] = hook.get_schema(
dataset_id=self.dataset_id,
table_id=self.table_id,
)
if "fields" in schema:
self.selected_fields = ','.join([field["name"] for field in schema["fields"]])
with hook.list_rows(
dataset_id=self.dataset_id,
table_id=self.table_id,
max_results=self.max_results,
selected_fields=self.selected_fields
) as rows:
if self.as_dict:
table_data = [json.dumps(dict(zip(self.selected_fields, row))).encode('utf-8') for row in rows]
else:
table_data = [row.values() for row in rows]
else:
with hook.get_conn().cursor().execute(self.query) as cursor:
if self.as_dict:
table_data = [json.dumps(dict(zip(self.keys,row))).encode('utf-8') for row in cursor.fetchmany(self.max_results)]
else:
table_data = [row for row in cursor.fetchmany(self.max_results)]
self.log.info('Total extracted rows: %s', len(table_data))
return table_data
```
### Use case/motivation
This would simplify getting data from BigQuery into airflow instead of having to first store the data in a separat table with BigQueryInsertJob and then fetch that.
Also simplifies handling the data with as_dict in the same way that many other database connectors in python does.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24460 | https://github.com/apache/airflow/pull/30887 | dff7e0de362e4cd318d7c285ec102923503eceb3 | b8f73768ec13f8d4cc1605cca3fa93be6caac473 | "2022-06-15T08:33:25Z" | python | "2023-05-09T06:05:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,360 | ["airflow/providers/snowflake/transfers/s3_to_snowflake.py", "airflow/providers/snowflake/utils/__init__.py", "airflow/providers/snowflake/utils/common.py", "docs/apache-airflow-providers-snowflake/operators/s3_to_snowflake.rst", "tests/providers/snowflake/transfers/test_s3_to_snowflake.py", "tests/providers/snowflake/utils/__init__.py", "tests/providers/snowflake/utils/test_common.py", "tests/system/providers/snowflake/example_snowflake.py"] | Pattern parameter in S3ToSnowflakeOperator | ### Description
I would like to propose to add a pattern parameter to allow loading only those files that satisfy the given regex pattern.
This function is supported on the Snowflake side, it just requires passing a parameter to the COPY INTO command.
[Snowflake documentation/](https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#loading-using-pattern-matching)
### Use case/motivation
I have multiple files with different schemas in one folder. I would like to move to Snowflake only the files which match a given name filter, and I am not able to do that with the prefix parameter.
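A sketch of how such a call could look, where `pattern` is the hypothetical new argument and everything else loosely follows the existing operator with placeholder names:
```python
from airflow.providers.snowflake.transfers.s3_to_snowflake import S3ToSnowflakeOperator

copy_matching_files = S3ToSnowflakeOperator(
    task_id="copy_matching_files",
    snowflake_conn_id="snowflake_default",
    stage="MY_STAGE",
    table="MY_TABLE",
    schema="PUBLIC",
    file_format="(type = 'CSV' field_delimiter = ',')",
    prefix="incoming/",
    pattern=".*employees.*[.]csv",  # hypothetical new argument, forwarded to COPY INTO ... PATTERN = '...'
)
```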
### Related issues
I am not aware of any.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24360 | https://github.com/apache/airflow/pull/24571 | 5877f45d65d5aa864941efebd2040661b6f89cb1 | 66e84001df069c76ba8bfefe15956c4018844b92 | "2022-06-09T22:13:38Z" | python | "2022-06-22T07:49:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,346 | ["airflow/utils/db.py"] | Add salesforce_default to List Connection | ### Apache Airflow version
2.1.2
### What happened
salesforce_default is not present in the Connections list (the default connections created on DB init).
### What you think should happen instead
salesforce_default should be added to the default connections so that it appears in the Connections list.
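A sketch of the kind of entry that could be appended to the default-connections bootstrap (login, password and extra values are placeholders; the exact fields would follow what the Salesforce provider expects):
```python
from airflow.models import Connection
from airflow.utils.db import merge_conn

# Illustrative only: register a salesforce_default entry alongside the other defaults.
merge_conn(
    Connection(
        conn_id="salesforce_default",
        conn_type="salesforce",
        login="username",
        password="password",
        extra='{"security_token": "security token"}',
    )
)
```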
### How to reproduce
After resetting the DB, look at the Connections list in the UI.
### Operating System
GCP Container
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
composer-1.17.1-airflow-2.1.2
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24346 | https://github.com/apache/airflow/pull/24347 | e452949610cff67c0e0a9918a8fefa7e8cc4b8c8 | 6d69dd062f079a8fbf72563fd218017208bfe6c1 | "2022-06-09T14:56:06Z" | python | "2022-06-13T18:25:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,343 | ["airflow/providers/google/cloud/operators/bigquery.py"] | BigQueryCreateEmptyTableOperator do not deprecated bigquery_conn_id yet | ### Apache Airflow version
2.3.2 (latest released)
### What happened
`bigquery_conn_id` is deprecated for other operators like `BigQueryDeleteTableOperator`
and replaced by `gcp_conn_id`, but this is not the case for `BigQueryCreateEmptyTableOperator`.
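To illustrate the inconsistency, this is roughly what one has to write today (dataset and table names are placeholders):
```python
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryCreateEmptyTableOperator,
    BigQueryDeleteTableOperator,
)

# Already migrated: accepts gcp_conn_id (bigquery_conn_id is deprecated here).
delete_table = BigQueryDeleteTableOperator(
    task_id="delete_table",
    deletion_dataset_table="my-project.my_dataset.my_table",
    gcp_conn_id="google_cloud_default",
)

# Not migrated yet: still expects bigquery_conn_id instead of gcp_conn_id.
create_table = BigQueryCreateEmptyTableOperator(
    task_id="create_table",
    dataset_id="my_dataset",
    table_id="my_table",
    bigquery_conn_id="google_cloud_default",
)
```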
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24343 | https://github.com/apache/airflow/pull/24376 | dd78e29a8c858769c9c21752f319e19af7f64377 | 8e0bddaea69db4d175f03fa99951f6d82acee84d | "2022-06-09T09:19:46Z" | python | "2022-06-12T21:07:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,338 | ["airflow/exceptions.py", "airflow/models/xcom_arg.py", "tests/decorators/test_python.py"] | TaskFlow AirflowSkipException causes downstream step to fail | ### Apache Airflow version
2.3.2 (latest released)
### What happened
I am using the TaskFlow API and have 2 tasks that lead to the same downstream task. These tasks check for new data and, when it is found, set an XCom entry with the new filename for the downstream task to handle. If no data is found, the upstream tasks raise a skip exception.
The downstream task has the trigger_rule = none_failed_min_one_success.
The problem is that a task which is skipped doesn't set any XCom. When the downstream task starts, it raises the error:
`airflow.exceptions.AirflowException: XComArg result from task2 at airflow_2_3_xcomarg_render_error with key="return_value" is not found!`
### What you think should happen instead
Based on the trigger rule "none_failed_min_one_success", the expectation is that an upstream task should be allowed to skip and the downstream task will still run. While the downstream task does try to start based on the trigger rules, it never really gets to run, since the error is raised while rendering the arguments.
### How to reproduce
The example DAG below will generate the error if run.
```
from airflow.decorators import dag, task
from airflow.exceptions import AirflowSkipException
@task
def task1():
return "example.csv"
@task
def task2():
raise AirflowSkipException()
@task(trigger_rule="none_failed_min_one_success")
def downstream_task(t1, t2):
print("task ran")
@dag(
default_args={"owner": "Airflow", "start_date": "2022-06-07"},
schedule_interval=None,
)
def airflow_2_3_xcomarg_render_error():
t1 = task1()
t2 = task2()
downstream_task(t1, t2)
example_dag = airflow_2_3_xcomarg_render_error()
```
### Operating System
Ubuntu 20.04.4 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24338 | https://github.com/apache/airflow/pull/25661 | c7215a28f9df71c63408f758ed34253a4dfaa318 | a4e38978194ef46565bc1e5ba53ecc65308d09aa | "2022-06-08T20:07:42Z" | python | "2022-08-16T12:05:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,331 | ["dev/example_dags/README.md", "dev/example_dags/update_example_dags_paths.py"] | "Example DAGs" link under kubernetes-provider documentation is broken. Getting 404 | ### What do you see as an issue?
_Example DAGs_ folder is not available for _apache-airflow-providers-cncf-kubernetes_ , which results in broken link on documentation page ( https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/index.html ).
Getting 404 error when clicked on _Example DAGs_ link (https://github.com/apache/airflow/tree/main/airflow/providers/cncf/kubernetes/example_dags )
<img width="1464" alt="Screenshot 2022-06-08 at 9 01 56 PM" src="https://user-images.githubusercontent.com/11991059/172657376-8a556e9e-72e5-4aab-9c71-b1da239dbf5c.png">
<img width="1475" alt="Screenshot 2022-06-08 at 9 01 39 PM" src="https://user-images.githubusercontent.com/11991059/172657413-c72d14f2-071f-4452-baf7-0f41504a5a3a.png">
### Solving the problem
Folder named _example_dags_ should be created under link (https://github.com/apache/airflow/tree/main/airflow/providers/cncf/kubernetes/) which should include kubernetes specific DAG examples.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24331 | https://github.com/apache/airflow/pull/24348 | 74ac9f788c31512b1fcd9254282905f34cc40666 | 85c247ae10da5ee93f26352d369f794ff4f2e47c | "2022-06-08T15:33:29Z" | python | "2022-06-09T17:33:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,328 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | `TI.log_url` is incorrect with mapped tasks | ### Apache Airflow version
2.3.0
### What happened
I had an `on_failure_callback` that sent `task_instance.log_url` to Slack; it no longer behaves correctly, giving me a page with no logs rendered instead of the logs for my task.
(Example of failure, URL like: https://XYZ.astronomer.run/dhp2pmdd/log?execution_date=2022-06-05T00%3A00%3A00%2B00%3A00&task_id=create_XXX_zip_files_and_upload&dag_id=my_dag )

### What you think should happen instead
The correct behavior would be the URL:
https://XYZ.astronomer.run/dhp2pmdd/log?execution_date=2022-06-05T00%3A00%3A00%2B00%3A00&task_id=create_XXX_zip_files_and_upload&dag_id=my_dag&map_index=0
as exemplified:

### How to reproduce
_No response_
### Operating System
Debian/Docker
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24328 | https://github.com/apache/airflow/pull/24335 | a9c350762db4dca7ab5f6c0bfa0c4537d697b54c | 48a6155bb1478245c1dd8b6401e4cce00e129422 | "2022-06-08T14:44:49Z" | python | "2022-06-14T20:15:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,321 | ["airflow/providers/amazon/aws/sensors/s3.py", "tests/providers/amazon/aws/sensors/test_s3_key.py"] | S3KeySensor wildcard_match only matching key prefixes instead of full patterns | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
3.4.0
### Apache Airflow version
2.3.2 (latest released)
### Operating System
Debian GNU/Linux 10
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
For patterns like "*.zip", the S3KeySensor succeeds for all files; it does not take the full pattern into account (i.e. the ".zip" part).
Bug introduced in https://github.com/apache/airflow/pull/22737
### What you think should happen instead
A full pattern match, as in version 3.3.0 (in S3KeySensor.poke()):
```
...
if self.wildcard_match:
return self.get_hook().check_for_wildcard_key(self.bucket_key, self.bucket_name)
...
```
Alternatively, the files obtained by `files = self.get_hook().get_file_metadata(prefix, bucket_name)`, which only match the prefix, should be further filtered.
### How to reproduce
Create a DAG with a key sensor task whose key contains a wildcard and a suffix. For example, the following task should succeed only if any ZIP files are available in "my-bucket", but it succeeds for any files instead:
`S3KeySensor(task_id="wait_for_file", bucket_name="my-bucket", bucket_key="*.zip", wildcard_match=True)`
### Anything else
Not directly part of this issue, but at the same time I would suggest including additional file attributes in the _check_key method, e.g. the actual key of each file. This way more filters, e.g. excluding specific keys, could be implemented using the check_fn.
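If the metadata passed to `check_fn` included the object key as suggested, a suffix filter could then be expressed along these lines (a sketch under that assumption; today the metadata does not carry the key):
```python
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

wait_for_zip = S3KeySensor(
    task_id="wait_for_zip",
    bucket_name="my-bucket",
    bucket_key="*",
    wildcard_match=True,
    # Assumes each metadata dict exposes the object key, as proposed above.
    check_fn=lambda files: any(f.get("Key", "").endswith(".zip") for f in files),
)
```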
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24321 | https://github.com/apache/airflow/pull/24378 | f8e106a531d2dc502bdfe47c3f460462ab0a156d | 7fed7f31c3a895c0df08228541f955efb16fbf79 | "2022-06-08T11:44:58Z" | python | "2022-06-11T19:31:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,197 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py"] | KubernetesPodOperator rendered template tab does not pretty print `env_vars` | ### Apache Airflow version
2.2.5
### What happened
I am using the `KubernetesPodOperator` for Airflow tasks in `Airflow 2.2.5` and it does not render the `env_vars` in the `rendered template` tab in an easily human-consumable format, as it did in `Airflow 1.10.x`.

### What you think should happen instead
The `env_vars` should be pretty-printed in a human-legible form.
### How to reproduce
Create a task with the `KubernetesPodOperator` and check the `Rendered template` tab of the task instance.
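For example, a task along these lines is enough to reproduce it (image and values are placeholders):
```python
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

show_env = KubernetesPodOperator(
    task_id="show_env",
    name="show-env",
    namespace="default",
    image="busybox",
    cmds=["sh", "-c", "env"],
    env_vars={"FOO": "bar", "SOME_URL": "https://example.com"},
    get_logs=True,
)
```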
### Operating System
Docker
### Versions of Apache Airflow Providers
2.2.5
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24197 | https://github.com/apache/airflow/pull/25850 | 1a087bca3d6ecceab96f9ab818b3b75262222d13 | db5543ef608bdd7aefdb5fefea150955d369ddf4 | "2022-06-04T20:45:01Z" | python | "2022-08-22T15:43:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,103 | ["chart/templates/workers/worker-kedaautoscaler.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_keda.py"] | Add support for KEDA HPA Config to Helm Chart | ### Description
> When managing the scale of a group of replicas using the HorizontalPodAutoscaler, it is possible that the number of replicas keeps fluctuating frequently due to the dynamic nature of the metrics evaluated. This is sometimes referred to as thrashing, or flapping.
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#flapping
Sometimes clusters need to restrict the flapping of Airflow worker replicas.
KEDA supports [`advanced.horizontalPodAutoscalerConfig`](https://keda.sh/docs/1.4/concepts/scaling-deployments/).
It would be great if users had an option in the Helm chart to configure scale-down behavior.
### Use case/motivation
KEDA currently cannot set advanced options.
We want to set advanced options like `scaleDown.stabilizationWindowSeconds`, `scaleDown.policies`.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24103 | https://github.com/apache/airflow/pull/24220 | 97948ecae7fcbb7dfdfb169cfe653bd20a108def | 8639c70f187a7d5b8b4d2f432d2530f6d259eceb | "2022-06-02T10:15:04Z" | python | "2022-06-30T17:16:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,077 | ["docs/exts/exampleinclude.py"] | Fix style of example-block | ### What do you see as an issue?
Style of example-block in the document is broken.
<img width="810" alt="example-block" src="https://user-images.githubusercontent.com/12693596/171412272-70ca791b-c798-4080-83ab-e358f290ac31.png">
This problem occurs when browser width is between 1000px and 1280px.
See: https://airflow.apache.org/docs/apache-airflow-providers-http/stable/operators.html
### Solving the problem
The container class should be removed.
```html
<div class="example-block-wrapper docutils container">
^^^^^^^^^
...
</div>
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24077 | https://github.com/apache/airflow/pull/24078 | e41b5a012427b5e7eab49de702b83dba4fc2fa13 | 5087f96600f6d7cc852b91079e92d00df6a50486 | "2022-06-01T14:08:48Z" | python | "2022-06-01T17:50:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,037 | [".gitignore", "chart/.gitignore", "chart/Chart.lock", "chart/Chart.yaml", "chart/INSTALL", "chart/NOTICE", "chart/charts/postgresql-10.5.3.tgz", "scripts/ci/libraries/_kind.sh", "tests/charts/conftest.py"] | Frequent failures of helm chart tests | ### Apache Airflow version
main (development)
### What happened
We keep getting very frequent failures of the Helm Chart tests, and it seems that a big number of those errors are caused by errors when pulling the postgres chart from Bitnami:
Example here (but I saw it happening very often recently):
https://github.com/apache/airflow/runs/6666449965?check_suite_focus=true#step:9:314
```
Save error occurred: could not find : chart postgresql not found in https://charts.bitnami.com/bitnami: looks like "https://charts.bitnami.com/bitnami" is not a valid chart repository or cannot be reached: stream error: stream ID 1; INTERNAL_ERROR
Deleting newly downloaded charts, restoring pre-update state
Error: could not find : chart postgresql not found in https://charts.bitnami.com/bitnami: looks like "https://charts.bitnami.com/bitnami" is not a valid chart repository or cannot be reached: stream error: stream ID 1; INTERNAL_ERROR
Dumping logs from KinD
```
It is not only a problem for our CI, but it might be a similar problem for our users who want to install the chart - they might also get the same kind of errors.
I guess we should either make it more resilient to intermittent problems with the Bitnami charts or use another chart (or maybe even host the chart ourselves somewhere within Apache infrastructure). While the postgres chart is not really needed for most "production" users, it is still a dependency of our chart and it makes our chart depend on an external and apparently flaky service.
### What you think should happen instead
We should find (or host ourselves) more stable dependency or get rid of it.
### How to reproduce
Look at some recent CI builds and see that they often fail in the K8S tests, and more often than not the reason is the missing postgresql chart.
### Operating System
any
### Versions of Apache Airflow Providers
not relevant
### Deployment
Other
### Deployment details
CI
### Anything else
Happy to make the change once we agree what's the best way :).
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24037 | https://github.com/apache/airflow/pull/24395 | 5d5976c08c867b8dbae8301f46e0c422d4dde1ed | 779571b28b4ae1906b4a23c031d30fdc87afb93e | "2022-05-31T08:08:25Z" | python | "2022-06-14T16:07:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 24,015 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | KubernetesPodOperator/KubernetesExecutor: Failed to adopt pod 422 | ### Apache Airflow version
2.3.0
### What happened
Here I provide steps to reproduce this.
The goal is to describe how to reproduce the "Failed to adopt pod" error condition.
The DAG step described below should be of type KubernetesPodOperator.
NOTE: under normal operation (where the MAIN_AIRFLOW_POD is never recycled by k8s) we will never see this edge case; it only appears when the workerPod is still running but the MAIN_AIRFLOW_POD is suddenly restarted/stopped, leaving orphan workerPods.
1] Implement a contrived DAG with a single step which is long-running (e.g. 6 minutes).
2] Deploy your airflow-2.1.4 / airflow-2.3.0 together with the contrived DAG.
3] Run your contrived DAG.
4] In the middle of running the single step, check via "kubectl" that your Kubernetes workerPod has been created and is running.
5] While the workerPod is still running, run "kubectl delete pod <OF_MAIN_AIRFLOW_POD>". This means the workerPod becomes an orphan.
6] The workerPod continues to run through to completion, after which the k8s status of the pod will be Completed; however, the pod does not shut itself down.
7] Via "kubectl", start up a new <MAIN_AIRFLOW_POD> so the web UI is running again.
8] From the MAIN_AIRFLOW_POD web UI, run your contrived DAG again.
9] While the contrived DAG is starting, you will see "Failed to adopt pod" printed in the logs, with a 422 error code.
Regarding step 9: you will find two occurrences of this error message in the airflow-2.1.4 / airflow-2.3.0 source code. At step 7, the general logging from the MAIN_APP may also output the "Failed to adopt pod" error message.
### What you think should happen instead
On previous versions of Airflow, e.g. 1.10.x, the orphan workerPods would be adopted by the second run-time of the airflowMainApp and either used to continue the same DAG and/or cleared away when complete.
This is not happening with the newer Airflow 2.1.4 / 2.3.0 (presumably because the code changed): on the second run-time of the airflowMainApp it seems to try to adopt the workerPod but fails at that point ("Failed to adopt pod" in the logs), and hence it cannot clear away orphan pods.
Given this is an edge case only (i.e. we would not expect k8s to be recycling the main airflowApp/pod anyway), it doesn't seem like a totally urgent bug. However, the reason for raising this issue is that in any k8s namespace, in particular in PROD, over time (e.g. 1 month?) the namespace will slowly fill up with orphan pods and somebody would need to manually log in to delete the old pods.
### How to reproduce
The reproduction steps are the same as those listed under "What happened" above.
### Operating System
kubernetes
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
Nothing special.
The CI/CD pipeline builds the app, using requirements.txt to pull in all the required Python dependencies (including a dependency on airflow 2.1.4 / 2.3.0).
The CI/CD pipeline then packages the app as an ECR image and deploys it directly to the k8s namespace.
### Anything else
This is 100% reproducible each and every time; I have tested this multiple times.
I also tested this on the old airflow 1.10.x a couple of times to verify that the bug did not exist there previously.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24015 | https://github.com/apache/airflow/pull/29279 | 05fb80ee9373835b2f72fad3e9976cf29aeca23b | d26dc223915c50ff58252a709bb7b33f5417dfce | "2022-05-30T07:49:27Z" | python | "2023-02-01T11:50:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,955 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py"] | Add missing parameter documentation for `KubernetesHook` and `KubernetesPodOperator` | ### Body
Currently the following modules are missing certain parameters in their docstrings. Because of this, these parameters are not captured in the [Python API docs for the Kubernetes provider](https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/_api/airflow/providers/cncf/kubernetes/index.html).
- [ ] KubernetesHook: `in_cluster`, `config_file`, `cluster_context`, `client_configuration`
- [ ] KubernetesPodOperator: `env_from`, `node_selectors`, `pod_runtime_info_envs`, `configmaps`
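For illustration, closing the first item would mean adding Sphinx-style `:param:` lines to the hook's docstring, roughly like the sketch below (the wording is a placeholder, not the final documentation):

```python
class KubernetesHook:  # simplified stand-in for the real hook class
    """
    Interact with a Kubernetes cluster.

    :param conn_id: the Kubernetes connection to use
    :param in_cluster: use the in-cluster configuration instead of a kubeconfig file
    :param config_file: path to the kubeconfig file to load
    :param cluster_context: context to use when the kubeconfig defines several contexts
    :param client_configuration: optional ``kubernetes.client.Configuration`` passed to the API client
    """
```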
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23955 | https://github.com/apache/airflow/pull/24054 | 203fe71b49da760968c26752957f765c4649423b | 98b4e48fbc1262f1381e7a4ca6cce31d96e6f5e9 | "2022-05-27T03:23:54Z" | python | "2022-06-06T22:20:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,954 | ["airflow/providers/databricks/operators/databricks.py", "airflow/providers/databricks/operators/databricks_sql.py", "docs/apache-airflow-providers-databricks/operators/submit_run.rst", "docs/spelling_wordlist.txt"] | Add missing parameter documentation in `DatabricksSubmitRunOperator` and `DatabricksSqlOperator` | ### Body
Currently the following modules are missing certain parameters in their docstrings. Because of this, these parameters are not captured in the [Python API docs for the Databricks provider](https://airflow.apache.org/docs/apache-airflow-providers-databricks/stable/_api/airflow/providers/databricks/index.html).
- [ ] DatabricksSubmitRunOperator: `tasks`
- [ ] DatabricksSqlOperator: `do_xcom_push`
- Granted this is really part of the `BaseOperator`, but this operator specifically sets the default value to False so it would be good if this was explicitly listed for users.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23954 | https://github.com/apache/airflow/pull/24599 | 2e5737df531410d2d678d09b5d2bba5d37a06003 | 82f842ffc56817eb039f1c4f1e2c090e6941c6af | "2022-05-27T03:10:32Z" | python | "2022-07-28T15:19:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,935 | ["airflow/providers/ftp/hooks/ftp.py", "tests/providers/ftp/hooks/test_ftp.py"] | No option to set blocksize when retrieving a file in ftphook | ### Apache Airflow version
2.0.0
### What happened
Using FTPHook, I'm trying to download a file in chunks, but the default blocksize is 8192 and cannot be changed.
The retrieve_file code calls `conn.retrbinary(f'RETR {remote_file_name}', callback)`, but no blocksize is passed, even though the function is declared as:
`def retrbinary(self, cmd, callback, blocksize=8192, rest=None)`
### What you think should happen instead
Allow passing a block size to `retrieve_file`.
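A minimal sketch of what this could look like, assuming the hook simply forwards a new optional argument to `ftplib` (names are a suggestion only, not the final API):

```python
import ftplib


class FTPHookSketch:  # simplified stand-in for airflow.providers.ftp.hooks.ftp.FTPHook
    def __init__(self, host):
        self.conn = ftplib.FTP(host)

    def retrieve_file(self, remote_full_path, local_full_path, block_size=8192):
        # Forward the caller-chosen block size instead of always using ftplib's 8192 default.
        with open(local_full_path, "wb") as output_handle:
            self.conn.retrbinary(f"RETR {remote_full_path}", output_handle.write, blocksize=block_size)
```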
### How to reproduce
_No response_
### Operating System
gcp
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23935 | https://github.com/apache/airflow/pull/24860 | 2f29bfefb59b0014ae9e5f641d3f6f46c4341518 | 64412ee867fe0918cc3b616b3fb0b72dcd88125c | "2022-05-26T12:06:34Z" | python | "2022-07-07T20:54:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,868 | ["dev/breeze/src/airflow_breeze/commands/testing_commands.py"] | Don’t show traceback on 'breeze tests' subprocess returning non-zero | ### Body
Currently, if any tests fail when `breeze tests` is run, Breeze 2 would emit a traceback pointing to the `docker-compose` subprocess call. This is due to Docker propagating the exit call of the underlying `pytest` subprocess. While it is technically correct to emit an exception, the traceback is useless in this context, and only clutters output. It may be a good idea to add a special case for this and suppress the exception.
A similar situation can be observed with `breeze shell` if you run `exit 1` in the shell.
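A minimal sketch of the suppression (assuming the command wraps the `docker-compose` call with `subprocess`; the function name is illustrative) would be to catch the non-zero return code and exit with it instead of letting the exception bubble up:

```python
import subprocess
import sys


def run_tests(compose_cmd):
    # Propagate the pytest exit code without printing a useless Python traceback.
    completed = subprocess.run(compose_cmd)
    if completed.returncode != 0:
        sys.exit(completed.returncode)
```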
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23868 | https://github.com/apache/airflow/pull/23897 | 1bf6dded9a5dcc22238b8943028b08741e36dfe5 | d788f4b90128533b1ac3a0622a8beb695b52e2c4 | "2022-05-23T14:12:38Z" | python | "2022-05-24T20:56:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,867 | ["dev/breeze/src/airflow_breeze/commands/ci_image_commands.py", "dev/breeze/src/airflow_breeze/utils/md5_build_check.py", "images/breeze/output-commands-hash.txt"] | Don’t prompt for 'breeze build-image' | ### Body
Currently, running the (new) `breeze build-image` brings up two prompts if any of the meta files are outdated:
```
$ breeze build-image
Good version of Docker: 20.10.14.
Good version of docker-compose: 2.5.1
The following important files are modified in ./airflow since last time image was built:
* setup.py
* Dockerfile.ci
* scripts/docker/common.sh
* scripts/docker/install_additional_dependencies.sh
* scripts/docker/install_airflow.sh
* scripts/docker/install_airflow_dependencies_from_branch_tip.sh
* scripts/docker/install_from_docker_context_files.sh
Likely CI image needs rebuild
Do you want to build the image (this works best when you have good connection and can take usually from 20 seconds to few minutes depending how old your image is)?
Press y/N/q. Auto-select n in 10 seconds (add `--answer n` to avoid delay next time): y
This might take a lot of time (more than 10 minutes) even if you havea good network connection. We think you should attempt to rebase first.
But if you really, really want - you can attempt it. Are you really sure?
Press y/N/q. Auto-select n in 10 seconds (add `--answer n` to avoid delay next time): y
```
While the prompts are well-intentioned, they don't really make sense for `build-image`, since the user already gave an explicit answer by running `build-image`. They should be suppressed.
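A purely hypothetical sketch of the special case (names below are invented for illustration and are not the real Breeze internals):

```python
# Hypothetical sketch only -- names are invented and do not match Breeze internals.
def build_ci_image():
    print("building image ...")  # stand-in for the real build step


def maybe_rebuild(invoked_command):
    # Running `breeze build-image` is already an explicit answer, so only
    # interactive rebuild suggestions (e.g. from `breeze shell`) should prompt.
    if invoked_command != "build-image":
        answer = input("Do you want to build the image? [y/N] ")
        if answer.strip().lower() != "y":
            return
    build_ci_image()
```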
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23867 | https://github.com/apache/airflow/pull/23898 | cac7ab5c4f4239b04d7800712ee841f0e6f6ab86 | 90940b529340ef7f9b8c51d5c7d9b6a848617dea | "2022-05-23T13:44:37Z" | python | "2022-05-24T16:27:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,822 | ["airflow/providers/amazon/aws/example_dags/example_dms.py", "airflow/providers/amazon/aws/operators/rds.py", "docs/apache-airflow-providers-amazon/operators/rds.rst", "tests/providers/amazon/aws/operators/test_rds.py", "tests/system/providers/amazon/aws/rds/__init__.py", "tests/system/providers/amazon/aws/rds/example_rds_instance.py"] | Add an AWS operator for Create RDS Database | ### Description
@eladkal suggested we add the operator and then incorporate it into https://github.com/apache/airflow/pull/23681. I have a little bit of a backlog right now trying to get the System Tests up and running for AWS, but if someone wants to get to it before me, it should be a pretty easy first contribution.
The required API call is documented [here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.create_db_instance) and I'm happy to help with any questions and/or help review it if someone wants to take a stab at it before I get the time.
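A rough sketch of what such an operator might look like, built directly on the documented boto3 call (this is only an illustration, not the final provider implementation, which would go through the Airflow AWS hooks):

```python
from typing import Optional

import boto3
from airflow.models import BaseOperator


class RdsCreateDbInstanceOperatorSketch(BaseOperator):
    """Illustrative only -- not the operator that would actually land in the Amazon provider."""

    def __init__(self, *, db_instance_identifier: str, db_instance_class: str, engine: str,
                 rds_kwargs: Optional[dict] = None, **kwargs):
        super().__init__(**kwargs)
        self.db_instance_identifier = db_instance_identifier
        self.db_instance_class = db_instance_class
        self.engine = engine
        self.rds_kwargs = rds_kwargs or {}

    def execute(self, context):
        client = boto3.client("rds")  # the real operator would use an Airflow AWS hook/connection
        response = client.create_db_instance(
            DBInstanceIdentifier=self.db_instance_identifier,
            DBInstanceClass=self.db_instance_class,
            Engine=self.engine,
            **self.rds_kwargs,  # e.g. AllocatedStorage, MasterUsername, MasterUserPassword
        )
        client.get_waiter("db_instance_available").wait(DBInstanceIdentifier=self.db_instance_identifier)
        return response["DBInstance"]["DBInstanceArn"]
```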
### Use case/motivation
_No response_
### Related issues
Could be used to simplify https://github.com/apache/airflow/pull/23681
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23822 | https://github.com/apache/airflow/pull/24099 | c7feb31786c7744da91d319f499d9f6015d82454 | bf727525e1fd777e51cc8bc17285f6093277fdef | "2022-05-20T01:28:34Z" | python | "2022-06-28T19:32:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,792 | ["airflow/models/expandinput.py", "tests/models/test_mappedoperator.py"] | Dynamic task mapping creates too many mapped instances when task pushed non-default XCom | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Excess tasks are created when using dynamic task mapping with KubernetesPodOperator, but only in certain cases which I do not understand. I have a simple working example of this where the flow is:
- One task that returns a list XCom (list of lists, since I'm partial-ing to `KubernetesPodOperator`'s `arguments`) of length 3. This looks like `[["a"], ["b"], ["c"]]`
- A `partial` from this, which is expanded on the above's result. Each resulting task has an XCom of a single element list that looks like `["d"]`. We expect the `expand` to result in 3 tasks, which it does. So far so good. Why doesn't the issue occur at this stage? No clue.
- A `partial` from the above. We expect 3 tasks in this final stage, but get 9. 3 succeed and 6 fail consistently. This 3x rule scales to as many tasks as you define in step 2 (e.g. 2 tasks in step 2 -> 6 tasks in step 3, where 4 fail)

If I recreate this using the TaskFlow API with `PythonOperator`s, I get the expected result of 1 task -> 3 tasks -> 3 tasks

Futhermore, if I attempt to look at the `Rendered Template` of the failed tasks in the `KubernetesPodOperator` implementation (first image), I consistently get `Error rendering template` and all the fields are `None`. The succeeded tasks look normal.

Since the `Rendered Template` view fails to load, I can't confirm what is actually getting provided to these failing tasks' `argument` parameter. If there's a way I can query the meta database to see this, I'd be glad to if given instruction.
### What you think should happen instead
I think this has to do with how XComs are specially handled with the `KubernetesPodOperator`. If we look at the XComs tab of the upstream task (`some-task-2` in the above images), we see that the return value specifies `pod_name` and `pod_namespace` along with `return_value`.

Whereas in the `t2` task of the TaskFlow version, it only contains `return_value`.

I haven't dug through the code to verify, but I have a strong feeling these extra values `pod_name` and `pod_namespace` are being used to generate the `OperatorPartial`/`MappedOperator` as well when they shouldn't be.
### How to reproduce
Run this DAG in a k8s context:
```
from datetime import datetime
from airflow import XComArg
from airflow.models import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
def make_operator(
**kwargs
):
return KubernetesPodOperator(
**{
'get_logs': True,
'in_cluster': True,
'is_delete_operator_pod': True,
'namespace': 'default',
'startup_timeout_seconds': 600,
**kwargs,
}
)
def make_partial_operator(
**kwargs
):
return KubernetesPodOperator.partial(
**{
'get_logs': True,
'in_cluster': True,
'is_delete_operator_pod': True,
'namespace': 'default',
'startup_timeout_seconds': 600,
**kwargs,
}
)
with DAG(dag_id='test-pod-xcoms',
schedule_interval=None,
start_date=datetime(2020, 1, 1),
max_active_tasks=20) as dag:
op1 = make_operator(
        cmds=['python3', '-c', 'import json;f=open("/airflow/xcom/return.json", "w");f.write(json.dumps([["a"], ["b"], ["c"]]))'],
image='python:3.9-alpine',
name='airflow-private-image-pod-1',
task_id='some-task-1',
do_xcom_push=True
)
op2 = make_partial_operator(
        cmds=['python3', '-c', 'import json;f=open("/airflow/xcom/return.json", "w");f.write(json.dumps(["d"]))'],
image='python:3.9-alpine',
name='airflow-private-image-pod-2',
task_id='some-task-2',
do_xcom_push=True
)
op3 = make_partial_operator(
cmds=['echo', 'helloworld'],
image='alpine:latest',
name='airflow-private-image-pod-3',
task_id='some-task-3',
)
op3.expand(arguments=XComArg(op2.expand(arguments=XComArg(op1))))
```
For the TaskFlow version of this that works, run this DAG (doesn't have to be k8s context):
```
from datetime import datetime
from airflow.decorators import task
from airflow.models import DAG, Variable
@task
def t1():
return [[1], [2], [3]]
@task
def t2(val):
return val
@task
def t3(val):
print(val)
with DAG(dag_id='test-mapping',
schedule_interval=None,
start_date=datetime(2020, 1, 1)) as dag:
t3.partial().expand(val=t2.partial().expand(val=t1()))
```
### Operating System
MacOS 11.6.5
### Versions of Apache Airflow Providers
Relevant:
```
apache-airflow-providers-cncf-kubernetes==4.0.1
```
Full:
```
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-docker==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-grpc==2.0.4
apache-airflow-providers-hashicorp==2.2.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-odbc==2.0.4
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-sendgrid==2.0.4
apache-airflow-providers-sftp==2.6.0
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Docker (Docker Desktop)
- Server Version: 20.10.13
- API Version: 1.41
- Builder: 2
Kubernetes (Docker Desktop)
- Env: docker-desktop
- Context: docker-desktop
- Cluster Name: docker-desktop
- Namespace: default
- Container Runtime: docker
- Version: v1.22.5
Helm:
- version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.5"}
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23792 | https://github.com/apache/airflow/pull/24530 | df388a3d5364b748993e61b522d0b68ff8b8124a | a69095fea1722e153a95ef9da93b002b82a02426 | "2022-05-19T01:02:41Z" | python | "2022-07-27T08:36:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,786 | ["airflow/www/utils.py", "airflow/www/views.py"] | DAG Loading Slow with Dynamic Tasks - Including Test Results and Benchmarking | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The web UI is slow to load the default (grid) view for DAGs when there are mapped tasks with a high number of expansions.
I did some testing with DAGs that have a variable number of tasks, along with changing the webserver resources to see how this affects the load times.
Here is a graph showing that testing. Let me know if you have any other questions about this.
<img width="719" alt="image" src="https://user-images.githubusercontent.com/89415310/169158337-ffb257ae-21bc-4c19-aaec-b29873d9fe93.png">
My findings based on what I'm seeing here:
The jump from 5->10 AUs makes a difference but 10 to 20 does not make a difference. There are diminishing returns when bumping up the webserver resources which leads me to believe that this could be a factor of database performance after the webserver is scaled to a certain point.
If we look at the graph on a log scale, it's almost perfectly linear for the 10 and 20AU lines on the plot. This leads me to believe that the time that it takes to load is a direct function of the number of task expansions that we have for a mapped task.
### What you think should happen instead
The web UI should load in a reasonable amount of time. Anything less than 10 seconds would be acceptable relative to the performance we're getting now; ideally, somewhere under 2-3 seconds would be best, if possible.
### How to reproduce
```
from datetime import datetime
from airflow.models import DAG
from airflow.operators.empty import EmptyOperator
from airflow.operators.python import PythonOperator
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email_on_failure': False,
'email_on_retry': False,
}
initial_scale = 7
max_scale = 12
scaling_factor = 2
for scale in range(initial_scale, max_scale + 1):
dag_id = f"dynamic_task_mapping_{scale}"
with DAG(
dag_id=dag_id,
default_args=default_args,
catchup=False,
schedule_interval=None,
start_date=datetime(1970, 1, 1),
render_template_as_native_obj=True,
) as dag:
start = EmptyOperator(task_id="start")
mapped = PythonOperator.partial(
task_id="mapped",
python_callable=lambda m: print(m),
).expand(
op_args=[[x] for x in list(range(2**scale))]
)
end = EmptyOperator(task_id="end")
start >> mapped >> end
globals()[dag_id] = dag
```
### Operating System
Debian
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23786 | https://github.com/apache/airflow/pull/23813 | 86cfd1244a641a8f17c9b33a34399d9be264f556 | 7ab5ea7853df9d99f6da3ab804ffe085378fbd8a | "2022-05-18T21:23:59Z" | python | "2022-05-20T04:18:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,783 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "kubernetes_tests/test_kubernetes_pod_operator_backcompat.py"] | Partial of a KubernetesPodOperator does not allow for limit_cpu and limit_memory in the resources argument | ### Apache Airflow version
2.3.0 (latest released)
### What happened
When performing dynamic task mapping and providing Kubernetes limits to the `resources` argument, the DAG raises an import error:
```
Broken DAG: [/opt/airflow/dags/bulk_image_processing.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 287, in partial
partial_kwargs["resources"] = coerce_resources(partial_kwargs["resources"])
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 133, in coerce_resources
return Resources(**resources)
TypeError: __init__() got an unexpected keyword argument 'limit_cpu'
```
The offending code is:
```
KubernetesPodOperator.partial(
    get_logs=True,
    in_cluster=True,
    is_delete_operator_pod=True,
    namespace=settings.namespace,
    resources={'limit_cpu': settings.IMAGE_PROCESSING_OPERATOR_CPU, 'limit_memory': settings.IMAGE_PROCESSING_OPERATOR_MEMORY},
    service_account_name=settings.SERVICE_ACCOUNT_NAME,
    startup_timeout_seconds=600,
**kwargs,
)
```
But you can see this in any DAG utilizing a `KubernetesPodOperator.partial` where the `partial` contains the `resources` argument.
### What you think should happen instead
The `resources` argument should be taken at face value and applied to the `OperatorPartial` and subsequently the `MappedOperator`.
### How to reproduce
Try to import this DAG using Airflow 2.3.0:
```
from datetime import datetime
from airflow import XComArg
from airflow.models import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
def make_operator(
**kwargs
):
return KubernetesPodOperator(
**{
'get_logs': True,
'in_cluster': True,
'is_delete_operator_pod': True,
'namespace': 'default',
'startup_timeout_seconds': 600,
**kwargs,
}
)
def make_partial_operator(
**kwargs
):
return KubernetesPodOperator.partial(
**{
'get_logs': True,
'in_cluster': True,
'is_delete_operator_pod': True,
'namespace': 'default',
'startup_timeout_seconds': 600,
**kwargs,
}
)
with DAG(dag_id='bulk_image_processing',
schedule_interval=None,
start_date=datetime(2020, 1, 1),
max_active_tasks=20) as dag:
op1 = make_operator(
arguments=['--bucket-name', f'{{{{ dag_run.conf.get("bucket", "some-fake-default") }}}}'],
cmds=['python3', 'some_entrypoint'],
image='some-image',
name='airflow-private-image-pod-1',
task_id='some-task',
do_xcom_push=True
)
op2 = make_partial_operator(
image='another-image',
name=f'airflow-private-image-pod-2',
resources={'limit_cpu': '2000m', 'limit_memory': '16Gi'},
task_id='another-task',
cmds=[
'some',
'set',
'of',
'cmds'
]
).expand(arguments=XComArg(op1))
```
### Operating System
MacOS 11.6.5
### Versions of Apache Airflow Providers
Relevant:
```
apache-airflow-providers-cncf-kubernetes==4.0.1
```
Full:
```
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-docker==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-grpc==2.0.4
apache-airflow-providers-hashicorp==2.2.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-odbc==2.0.4
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-sendgrid==2.0.4
apache-airflow-providers-sftp==2.6.0
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Docker (Docker Desktop)
- Server Version: 20.10.13
- API Version: 1.41
- Builder: 2
Kubernetes (Docker Desktop)
- Env: docker-desktop
- Context: docker-desktop
- Cluster Name: docker-desktop
- Namespace: default
- Container Runtime: docker
- Version: v1.22.5
Helm:
- version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.5"}
### Anything else
You can get around this by creating the `partial` first without calling `expand` on it, setting the resources via the `kwargs` parameter, then calling `expand`. Example:
```
from datetime import datetime
from airflow import XComArg
from airflow.models import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
def make_operator(
**kwargs
):
return KubernetesPodOperator(
**{
'get_logs': True,
'in_cluster': True,
'is_delete_operator_pod': True,
'namespace': 'default',
'startup_timeout_seconds': 600,
**kwargs,
}
)
def make_partial_operator(
**kwargs
):
return KubernetesPodOperator.partial(
**{
'get_logs': True,
'in_cluster': True,
'is_delete_operator_pod': True,
'namespace': 'default',
'startup_timeout_seconds': 600,
**kwargs,
}
)
with DAG(dag_id='bulk_image_processing',
schedule_interval=None,
start_date=datetime(2020, 1, 1),
max_active_tasks=20) as dag:
op1 = make_operator(
arguments=['--bucket-name', f'{{{{ dag_run.conf.get("bucket", "some-fake-default") }}}}'],
cmds=['python3', 'some_entrypoint'],
image='some-image',
name='airflow-private-image-pod-1',
task_id='some-task',
do_xcom_push=True
)
op2 = make_partial_operator(
image='another-image',
name=f'airflow-private-image-pod-2',
task_id='another-task',
cmds=[
'some',
'set',
'of',
'cmds'
]
)
op2.kwargs['resources'] = {'limit_cpu': '2000m', 'limit_memory': '16Gi'}
op2.expand(arguments=XComArg(op1))
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23783 | https://github.com/apache/airflow/pull/24673 | 40f08900f2d1fb0d316b40dde583535a076f616b | 45f4290712f5f779e57034f81dbaab5d77d5de85 | "2022-05-18T18:46:44Z" | python | "2022-06-28T06:45:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,772 | ["airflow/www/utils.py", "airflow/www/views.py"] | New grid view in Airflow 2.3.0 has very slow performance on large DAGs relative to tree view in 2.2.5 | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I upgraded a local dev deployment of Airflow from 2.2.5 to 2.3.0, then loaded the new `/dags/<dag_id>/grid` page for a few dag ids.
On a big DAG, I’m seeing 30+ second latency on the `/grid` API, followed by a 10+ second delay each time I click a green rectangle. For a smaller DAG I tried, the page was pretty snappy.
I went back to 2.2.5 and loaded the tree view for comparison, and saw that the `/tree/` endpoint on the large DAG had 9 seconds of latency, and clicking a green rectangle had instant responsiveness.
This is slow enough that it would be a blocker for my team to upgrade.
### What you think should happen instead
The grid view should be equally performant to the tree view it replaces
### How to reproduce
Generate a large DAG. Mine looks like the following:
- 900 tasks
- 150 task groups
- 25 historical runs
Compare against a small DAG, in my case:
- 200 tasks
- 36 task groups
- 25 historical runs
The large DAG is unusable, the small DAG is usable.
### Operating System
Ubuntu 20.04.3 LTS (Focal Fossa)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Docker-compose deployment on an EC2 instance running ubuntu.
Airflow web server is nearly stock image from `apache/airflow:2.3.0-python3.9`
### Anything else
Screenshot of load time:
<img width="1272" alt="image" src="https://user-images.githubusercontent.com/643593/168957215-74eefcb0-578e-46c9-92b8-74c4a6a20769.png">
GIF of click latency:

### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23772 | https://github.com/apache/airflow/pull/23947 | 5ab58d057abb6b1f28eb4e3fb5cec7dc9850f0b0 | 1cf483fa0c45e0110d99e37b4e45c72c6084aa97 | "2022-05-18T04:37:28Z" | python | "2022-05-26T19:53:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,733 | ["airflow/www/templates/airflow/dag.html"] | Task Instance pop-up menu - some buttons not always clickable | ### Apache Airflow version
2.3.0 (latest released)
### What happened
See the recorded screencap - in the task instance pop-up menu, sometimes the top menu options aren't clickable until you move the mouse around a bit and find an area where it will allow you to click.
This only seems to affect the `Instance Details`, `Rendered`, `Log`, and `XCom` options - but not `List Instances, all runs` or `Filter Upstream`.
https://user-images.githubusercontent.com/15913202/168657933-532f58c6-7f33-4693-80cf-26436ff78ceb.mp4
### What you think should happen instead
The entire 'bubble' for the options such as 'XCom' should always be clickable, without having to find a 'sweet spot'
### How to reproduce
I am using Astro Runtime 5.0.0 in a localhost environment
### Operating System
macOS 11.5.2
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-databricks==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-snowflake==2.6.0
apache-airflow-providers-sqlite==2.1.3
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
I experience this in an Astro deployment as well (not just localhost) using the same runtime 5.0.0 image
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23733 | https://github.com/apache/airflow/pull/23736 | 71e4deb1b093b7ad9320eb5eb34eca8ea440a238 | 239a9dce5b97d45620862b42fd9018fdc9d6d505 | "2022-05-16T18:28:42Z" | python | "2022-05-17T02:58:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,722 | ["airflow/providers/google/cloud/operators/cloud_sql.py", "tests/providers/google/cloud/operators/test_cloud_sql.py"] | Add fields to CLOUD_SQL_EXPORT_VALIDATION | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==5.0.0
### Apache Airflow version
2.1.2
### Operating System
GCP Container
### Deployment
Composer
### Deployment details
composer-1.17.1-airflow-2.1.2
### What happened
I got a validation warning.
Same as #23613.
### What you think should happen instead
The following fields are not implemented in CLOUD_SQL_EXPORT_VALIDATION and should be added to it:
- sqlExportOptions
- mysqlExportOptions
- masterData
- csvExportOptions
- escapeCharacter
- quoteCharacter
- fieldsTerminatedBy
- linesTerminatedBy
These are all the fields that have not been added.
https://cloud.google.com/sql/docs/mysql/admin-api/rest/v1beta4/operations#exportcontext
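For illustration, the missing entries could look roughly like this, following the nested-dict format the field validator already uses (field names taken from the API reference above; treat this as a sketch rather than the exact patch):

```python
# Sketch of additional entries for CLOUD_SQL_EXPORT_VALIDATION (illustrative only).
ADDITIONAL_EXPORT_CONTEXT_FIELDS = [
    {
        "name": "sqlExportOptions",
        "type": "dict",
        "optional": True,
        "fields": [
            {"name": "mysqlExportOptions", "type": "dict", "optional": True,
             "fields": [{"name": "masterData", "optional": True}]},
        ],
    },
    {
        "name": "csvExportOptions",
        "type": "dict",
        "optional": True,
        "fields": [
            {"name": "escapeCharacter", "optional": True},
            {"name": "quoteCharacter", "optional": True},
            {"name": "fieldsTerminatedBy", "optional": True},
            {"name": "linesTerminatedBy", "optional": True},
        ],
    },
]
```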
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23722 | https://github.com/apache/airflow/pull/23724 | 9e25bc211f6f7bba1aff133d21fe3865dabda53d | 3bf9a1df38b1ccfaf965a207d047b30452df1ba5 | "2022-05-16T11:05:33Z" | python | "2022-05-16T19:16:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,705 | ["chart/templates/redis/redis-statefulset.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_annotations.py"] | Adding PodAnnotations for Redis Statefulset | ### Description
Most Airflow services come with the ability to add annotations, apart from Redis. This feature request adds this capability to the Redis helm template as well.
### Use case/motivation
Specifically for us, annotations and labels are used to integrate Airflow with external services, such as Datadog, and without it, the integration becomes a bit more complex.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23705 | https://github.com/apache/airflow/pull/23708 | ef79a0d1c4c0a041d7ebf83b93cbb25aa3778a70 | 2af19f16a4d94e749bbf6c7c4704e02aac35fc11 | "2022-05-14T07:46:23Z" | python | "2022-07-11T21:27:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,692 | ["docs/apache-airflow/extra-packages-ref.rst"] | Conflicts with airflow constraints for airflow 2.3.0 python 3.9 | ### Apache Airflow version
2.3.0 (latest released)
### What happened
When installing airflow 2.3.0 using the pip command with the "all" extra, it fails on the dependency google-ads:
`pip install "apache-airflow[all]==2.3.0" -c "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0/constraints-3.9.txt"`
> The conflict is caused by:
apache-airflow[all] 2.3.0 depends on google-ads>=15.1.1; extra == "all"
The user requested (constraint) google-ads==14.0.0
I changed the version of google-ads to 15.1.1, but then it failed on dependency databricks-sql-connector
> The conflict is caused by:
apache-airflow[all] 2.3.0 depends on databricks-sql-connector<3.0.0 and >=2.0.0; extra == "all"
The user requested (constraint) databricks-sql-connector==1.0.2
and then on different dependencies...
### What you think should happen instead
_No response_
### How to reproduce
(venv) [root@localhost]# `python -V`
Python 3.9.7
(venv) [root@localhost]# `pip install "apache-airflow[all]==2.3.0" -c "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0/constraints-3.9.txt"`
### Operating System
CentOS 7
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23692 | https://github.com/apache/airflow/pull/23697 | 4afa8e3cecf1e4a2863715d14a45160034ad31a6 | 310002e44887847991b0864bbf9a921c7b11e930 | "2022-05-13T00:01:55Z" | python | "2022-05-13T11:33:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,689 | ["airflow/timetables/_cron.py", "airflow/timetables/interval.py", "airflow/timetables/trigger.py", "tests/timetables/test_interval_timetable.py"] | Data Interval wrong when manually triggering with a specific logical date | ### Apache Airflow version
2.2.5
### What happened
When I use the date picker in the “Trigger DAG w/ config” page to choose a specific logical date, for some reason on a scheduled daily DAG the Data Interval Start (circled in red) is 2 days before the logical date (circled in blue) instead of the same as the logical date, and the Data Interval End is one day before the logical date. So the interval is the correct length, but on the wrong days.

I encountered this with a DAG with a daily schedule which typically runs at 09:30 UTC. I am testing this in a dev environment (with catchup off) and trying to trigger a run for 2022-05-09 09:30:00. I would expect the data interval to start at that same time and the data interval end to be 1 day after.
It has nothing to do with the previous run since that was way back on 2022-04-26
### What you think should happen instead
The data interval start date should be the same as the logical date (if it is a custom logical date)
### How to reproduce
I made a sample DAG as shown below:
```python
import pendulum
from airflow.models import DAG
from airflow.operators.python import PythonOperator
def sample(data_interval_start, data_interval_end):
return "data_interval_start: {}, data_interval_end: {}".format(str(data_interval_start), str(data_interval_end))
args = {
'start_date': pendulum.datetime(2022, 3, 10, 9, 30)
}
with DAG(
dag_id='sample_data_interval_issue',
default_args=args,
schedule_interval='30 9 * * *' # 09:30 UTC
) as sample_data_interval_issue:
task = PythonOperator(
task_id='sample',
python_callable=sample
)
```
I then turn the DAG on so that a scheduled DAG run starts (`2022-05-11, 09:30:00 UTC`), and the `data_interval_start` is the same as I expect, `2022-05-11T09:30:00+00:00`.
However, when I went to the "Trigger DAG w/ config" page, chose `2022-05-09 09:30:00+00:00` in the date chooser, and then triggered that, it shows the run datetime is `2022-05-09, 09:30:00 UTC`, but the `data_interval_start` is incorrectly set to `2022-05-08T09:30:00+00:00`, 2 days before the date I chose.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
N/A
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23689 | https://github.com/apache/airflow/pull/22658 | 026f1bb98cd05a26075bd4e4fb68f7c3860ce8db | d991d9800e883a2109b5523ae6354738e4ac5717 | "2022-05-12T22:29:26Z" | python | "2022-08-16T13:26:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,688 | ["airflow/decorators/base.py", "tests/decorators/test_python.py"] | _TaskDecorator has no __wrapped__ attribute in v2.3.0 | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I run a unit test on a task which is defined using the task decorator. In the unit test, I unwrap the task decorator with the `__wrapped__` attribute, but this no longer works in v2.3.0. It works in v2.2.5.
### What you think should happen instead
I expect the wrapped function to be returned. This was what occurred in v2.2.5
When running pytest on the airflow v2.3.0 the following error is thrown:
```AttributeError: '_TaskDecorator' object has no attribute '__wrapped__'```
### How to reproduce
Here's a rough outline of the code.
A module `hello.py` contains the task definition:
```
from airflow.decorators import task
@task
def hello_airflow():
print('hello airflow')
```
and the test contains
```
from hello import hello_airflow
def test_hello_airflow():
hello_airflow.__wrapped__()
```
Then run pytest
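As a possible interim workaround (hedged: this assumes the decorator object keeps the original callable on a `function` attribute in 2.3.0, which is an internal detail and may change), the test could call the stored callable directly:

```python
from hello import hello_airflow


def test_hello_airflow():
    # Assumes the task decorator exposes the undecorated callable as `.function`.
    hello_airflow.function()
```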
### Operating System
Rocky Linux 8.5 (Green Obsidian)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23688 | https://github.com/apache/airflow/pull/23830 | a8445657996f52b3ac5ce40a535d9c397c204d36 | a71e4b789006b8f36cd993731a9fb7d5792fccc2 | "2022-05-12T22:12:05Z" | python | "2022-05-23T01:24:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,679 | ["airflow/config_templates/config.yml.schema.json", "airflow/configuration.py", "tests/config_templates/deprecated.cfg", "tests/config_templates/deprecated_cmd.cfg", "tests/config_templates/deprecated_secret.cfg", "tests/config_templates/empty.cfg", "tests/core/test_configuration.py", "tests/utils/test_config.py"] | exceptions.DagRunNotFound: DagRun for example_bash_operator with run_id or execution_date of | ### Apache Airflow version
main (development)
### What happened
I am trying to run the `airflow tasks run` command locally and force `StandardTaskRunner` to use `_start_by_exec` instead of `_start_by_fork`:
```
airflow tasks run example_bash_operator also_run_this scheduled__2022-05-08T00:00:00+00:00 --job-id 237 --local --subdir /Users/ping_zhang/airlab/repos/airflow/airflow/example_dags/example_bash_operator.py -f -i
```
However, it always errors out:
see https://user-images.githubusercontent.com/8662365/168164336-a75bfac8-cb59-43a9-b9f3-0c345c5da79f.png
```
[2022-05-12 12:08:32,893] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this Traceback (most recent call last):
[2022-05-12 12:08:32,893] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/miniforge3/envs/apache-***/bin/***", line 33, in <module>
[2022-05-12 12:08:32,893] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this sys.exit(load_entry_point('apache-***', 'console_scripts', '***')())
[2022-05-12 12:08:32,893] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/__main__.py", line 38, in main
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this args.func(args)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/cli/cli_parser.py", line 51, in command
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this return func(*args, **kwargs)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/utils/cli.py", line 99, in wrapper
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this return f(*args, **kwargs)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/cli/commands/task_command.py", line 369, in task_run
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this ti, _ = _get_ti(task, args.execution_date_or_run_id, args.map_index, pool=args.pool)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/utils/session.py", line 71, in wrapper
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this return func(*args, session=session, **kwargs)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/cli/commands/task_command.py", line 152, in _get_ti
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this dag_run, dr_created = _get_dag_run(
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/cli/commands/task_command.py", line 112, in _get_dag_run
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this raise DagRunNotFound(
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this ***.exceptions.DagRunNotFound: DagRun for example_bash_operator with run_id or execution_date of 'scheduled__2022-05-08T00:00:00+00:00' not found
[2022-05-12 12:08:33,014] {local_task_job.py:163} INFO - Task exited with return code 1
[2022-05-12 12:08:33,048] {local_task_job.py:265} INFO - 0 downstream tasks scheduled from follow-on schedule check
[2022-05-12 12:11:30,742] {taskinstance.py:1120} INFO - Dependencies not met for <TaskInstance: example_bash_operator.also_run_this scheduled__2022-05-08T00:00:00+00:00 [running]>, dependency 'Task Instance Not Running' FAILED: Task is in the running state
[2022-05-12 12:11:30,743] {local_task_job.py:102} INFO - Task is not able to be run
```
I have checked that the dag_run does exist in my DB:

### What you think should happen instead
_No response_
### How to reproduce
pull the latest main branch with this commit: `7277122ae62305de19ceef33607f09cf030a3cd4`
run airflow scheduler, webserver and worker locally with `CeleryExecutor`.
### Operating System
Apple M1 Max, version: 12.2
### Versions of Apache Airflow Providers
NA
### Deployment
Other
### Deployment details
on my local mac with latest main branch, latest commit: `7277122ae62305de19ceef33607f09cf030a3cd4`
### Anything else
Python version:
Python 3.9.7
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23679 | https://github.com/apache/airflow/pull/23723 | ce8ea6691820140a0e2d9a5dad5254bc05a5a270 | 888bc2e233b1672a61433929e26b82210796fd71 | "2022-05-12T19:15:54Z" | python | "2022-05-20T14:09:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,670 | ["airflow/www/static/js/dags.js", "airflow/www/views.py", "tests/www/views/test_views_acl.py"] | Airflow 2.3.0: can't filter by owner if selected from dropdown | ### Apache Airflow version
2.3.0 (latest released)
### What happened
On a clean install of 2.3.0, whenever I try to filter by owner, if I select it from the dropdown (which correctly detects the owner's name) it returns the following error:
`DAG "ecodina" seems to be missing from DagBag.`
Webserver's log:
```
127.0.0.1 - - [12/May/2022:12:27:47 +0000] "GET /dagmodel/autocomplete?query=ecodin&status=all HTTP/1.1" 200 17 "http://localhost/home?search=ecodina" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "GET /dags/ecodina/grid?search=ecodina HTTP/1.1" 302 217 "http://localhost/home?search=ecodina" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "GET /home HTTP/1.1" 200 35774 "http://localhost/home?search=ecodina" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "POST /blocked HTTP/1.1" 200 2 "http://localhost/home" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "POST /last_dagruns HTTP/1.1" 200 402 "http://localhost/home" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "POST /dag_stats HTTP/1.1" 200 333 "http://localhost/home" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "POST /task_stats HTTP/1.1" 200 1194 "http://localhost/home" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
```
Instead, if I write the owner's name fully and avoid selecting it from the dropdown, it works as expected since it constructs the correct URL:
`my.airflow.com/home?search=ecodina`
### What you think should happen instead
The DAGs table should only show the selected owner's DAGs.
### How to reproduce
- Start the Airflow Webserver
- Connect to the Airflow webpage
- Type an owner name in the _Search DAGs_ textbox and select it from the dropdown
### Operating System
CentOS Linux 8
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Installed on a conda environment, as if it was a virtualenv:
- `conda create -c conda-forge -n airflow python=3.9`
- `conda activate airflow`
- `pip install "apache-airflow[postgres]==2.3.0" --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0/constraints-3.9.txt"`
Database: PostgreSQL 13
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23670 | https://github.com/apache/airflow/pull/23804 | 70b41e46b46e65c0446a40ab91624cb2291a5039 | 29afd35b9cfe141b668ce7ceccecdba60775a8ff | "2022-05-12T12:33:06Z" | python | "2022-05-24T13:43:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,666 | ["airflow/providers/amazon/aws/transfers/s3_to_sql.py", "airflow/providers/amazon/provider.yaml", "docs/apache-airflow-providers-amazon/operators/transfer/s3_to_sql.rst", "tests/providers/amazon/aws/transfers/test_s3_to_sql.py", "tests/system/providers/amazon/aws/example_s3_to_sql.py"] | Add transfers operator S3 to SQL / SQL to SQL | ### Description
Should we add an S3-to-SQL operator to the AWS transfers?
### Use case/motivation
1. After processing data with Spark/Glue (or similar), we need to publish the data to SQL (a rough operator sketch for this case follows below).
2. Synchronize data between two SQL databases.
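A very rough sketch of what the S3-to-SQL transfer could look like (the class name, hooks, and CSV assumption here are illustrative guesses, not an agreed design):

```python
import csv
import io

from airflow.hooks.base import BaseHook
from airflow.models import BaseOperator
from airflow.providers.amazon.aws.hooks.s3 import S3Hook


class S3ToSqlOperatorSketch(BaseOperator):
    """Illustrative only: copies a CSV object from S3 into a SQL table."""

    def __init__(self, *, s3_bucket, s3_key, table, sql_conn_id, aws_conn_id="aws_default", **kwargs):
        super().__init__(**kwargs)
        self.s3_bucket = s3_bucket
        self.s3_key = s3_key
        self.table = table
        self.sql_conn_id = sql_conn_id
        self.aws_conn_id = aws_conn_id

    def execute(self, context):
        s3_hook = S3Hook(aws_conn_id=self.aws_conn_id)
        data = s3_hook.read_key(self.s3_key, self.s3_bucket)
        rows = list(csv.reader(io.StringIO(data)))
        # Any DbApiHook-compatible connection (Postgres, MySQL, ...) works here.
        sql_hook = BaseHook.get_connection(self.sql_conn_id).get_hook()
        sql_hook.insert_rows(table=self.table, rows=rows)
```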
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23666 | https://github.com/apache/airflow/pull/29085 | e5730364b4eb5a3b30e815ca965db0f0e710edb6 | efaed34213ad4416e2f4834d0cd2f60c41814507 | "2022-05-12T09:41:35Z" | python | "2023-01-23T21:53:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,622 | ["airflow/providers/databricks/operators/databricks.py"] | DatabricksSubmitRunOperator and DatabricksRunNowOperator cannot define .json as template_ext | ### Apache Airflow version
2.2.2
### What happened
Introduced here https://github.com/apache/airflow/commit/0a2d0d1ecbb7a72677f96bc17117799ab40853e0, the Databricks operators now define the template_ext property as `('.json',)`. This change broke a few DAGs we currently have, as they basically pass a config JSON file that needs to be posted to Databricks. Example:
```python
DatabricksRunNowOperator(
task_id=...,
job_name=...,
python_params=["app.py", "--config", "/path/to/config/inside-docker-image.json"],
databricks_conn_id=...,
email_on_failure=...,
)
```
This snippet makes Airflow load /path/to/config/inside-docker-image.json, which is not desired.
@utkarsharma2 @potiuk can this change be reverted, please? It's causing headaches when a json file is provided as part of the dag parameters.
### What you think should happen instead
Use a more specific extension for the Databricks operators, like `.json-tpl`.
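In the meantime, one possible user-side workaround (a hedged sketch: subclassing keeps the rest of the operator's behaviour intact) is to override `template_ext` so literal `.json` paths are no longer read as template files:

```python
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator


class DatabricksRunNowNoJsonTemplating(DatabricksRunNowOperator):
    # Drop '.json' so plain file paths in python_params are passed through untouched.
    template_ext = ()
```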
### How to reproduce
_No response_
### Operating System
Any
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==2.6.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23622 | https://github.com/apache/airflow/pull/23641 | 84c9f4bf70cbc2f4ba19fdc5aa88791500d4daaa | acf89510cd5a18d15c1a45e674ba0bcae9293097 | "2022-05-10T13:54:23Z" | python | "2022-06-04T21:51:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,613 | ["airflow/providers/google/cloud/example_dags/example_cloud_sql.py", "airflow/providers/google/cloud/operators/cloud_sql.py", "tests/providers/google/cloud/operators/test_cloud_sql.py"] | Add an offload option to CloudSQLExportInstanceOperator validation specification | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==5.0.0
### Apache Airflow version
2.1.2
### Operating System
GCP Container
### Deployment
Composer
### Deployment details
composer-1.17.1-airflow-2.1.2
### What happened
I want to use serverless export to offload the export operation from the primary instance.
https://cloud.google.com/sql/docs/mysql/import-export#serverless
I used CloudSQLExportInstanceOperator with the `exportContext.offload` flag to perform a serverless export operation.
I got the following warning:
```
{field_validator.py:266} WARNING - The field 'exportContext.offload' is in the body, but is not specified in the validation specification '[{'name': 'fileType', 'allow_empty': False}, {'name': 'uri', 'allow_empty': False}, {'name': 'databases', 'optional': True, 'type': 'list'}, {'name': 'sqlExportOptions', 'type': 'dict', 'optional': True, 'fields': [{'name': 'tables', 'optional': True, 'type': 'list'}, {'name': 'schemaOnly', 'optional': True}]}, {'name': 'csvExportOptions', 'type': 'dict', 'optional': True, 'fields': [{'name': 'selectQuery'}]}]'. This might be because you are using newer API version and new field names defined for that version. Then the warning can be safely ignored, or you might want to upgrade the operatorto the version that supports the new API version.
```
### What you think should happen instead
I think a validation specification for `exportContext.offload` should be added.
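Purely as a sketch, the missing entry might follow the same format as the fields quoted in the warning above (the exact variable name and its placement in the provider are assumptions):
```python
# Hypothetical entry for the exportContext part of the validation specification
offload_field_spec = {'name': 'offload', 'optional': True}
```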
### How to reproduce
Try to use `exportContext.offload`, as in the example below.
```python
CloudSQLExportInstanceOperator(
    task_id='export_task',
    project_id='some_project',
    instance='cloud_sql_instance',
    body={
        "exportContext": {
            "fileType": "csv",
            "uri": "gs://my-bucket/export.csv",
            "databases": ["some_db"],
            "csvExportOptions": {"selectQuery": "select * from some_table limit 10"},
            "offload": True,
        }
    },
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23613 | https://github.com/apache/airflow/pull/23614 | 1bd75ddbe3b1e590e38d735757d99b43db1725d6 | 74557e41e3dcedec241ea583123d53176994cccc | "2022-05-10T07:23:07Z" | python | "2022-05-10T09:49:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,610 | ["airflow/executors/celery_kubernetes_executor.py", "airflow/executors/local_kubernetes_executor.py", "tests/executors/test_celery_kubernetes_executor.py", "tests/executors/test_local_kubernetes_executor.py"] | AttributeError: 'CeleryKubernetesExecutor' object has no attribute 'send_callback' | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The issue started to occur after upgrading Airflow from v2.2.5 to v2.3.0. The schedulers crash when a DAG's SLA is configured. This only happens with `CeleryKubernetesExecutor`; tested with `CeleryExecutor`, it works as expected.
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 736, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 824, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 919, in _do_scheduling
self._send_dag_callbacks_to_processor(dag, callback_to_run)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1179, in _send_dag_callbacks_to_processor
self._send_sla_callbacks_to_processor(dag)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1195, in _send_sla_callbacks_to_processor
self.executor.send_callback(request)
AttributeError: 'CeleryKubernetesExecutor' object has no attribute 'send_callback'
```
### What you think should happen instead
SLA callbacks should work as they did in the previous version, without crashing the scheduler.
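For illustration, a sketch of the delegation that appears to be missing. It assumes the scheduler assigns a `callback_sink` to the executor at runtime, as it does for the other executors; the class name is hypothetical and this is not the actual upstream fix:
```python
from airflow.executors.celery_kubernetes_executor import CeleryKubernetesExecutor


class PatchedCeleryKubernetesExecutor(CeleryKubernetesExecutor):
    """Hypothetical subclass adding the method the scheduler calls for SLA callbacks."""

    def send_callback(self, request):
        # Forward the callback request to whatever callback sink the scheduler
        # has assigned on this executor (assumed to be set before callbacks arrive).
        sink = getattr(self, "callback_sink", None)
        if sink is None:
            raise ValueError("Callback sink is not ready.")
        sink.send(request)
```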
### How to reproduce
1. Use `CeleryKubernetesExecutor`
2. Configure DAG's SLA
DAG to reproduce:
```
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Example DAG demonstrating the usage of the BashOperator."""
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
DEFAULT_ARGS = {
    "sla": timedelta(hours=1),
}

with DAG(
    dag_id="example_bash_operator",
    default_args=DEFAULT_ARGS,
    schedule_interval="0 0 * * *",
    start_date=datetime(2021, 1, 1),
    catchup=False,
    dagrun_timeout=timedelta(minutes=60),
    tags=["example", "example2"],
    params={"example_key": "example_value"},
) as dag:
    run_this_last = DummyOperator(
        task_id="run_this_last",
    )

    # [START howto_operator_bash]
    run_this = BashOperator(
        task_id="run_after_loop",
        bash_command="echo 1",
    )
    # [END howto_operator_bash]

    run_this >> run_this_last

    for i in range(3):
        task = BashOperator(
            task_id="runme_" + str(i),
            bash_command='echo "{{ task_instance_key_str }}" && sleep 1',
        )
        task >> run_this

    # [START howto_operator_bash_template]
    also_run_this = BashOperator(
        task_id="also_run_this",
        bash_command='echo "run_id={{ run_id }} | dag_run={{ dag_run }}"',
    )
    # [END howto_operator_bash_template]
    also_run_this >> run_this_last

    # [START howto_operator_bash_skip]
    this_will_skip = BashOperator(
        task_id="this_will_skip",
        bash_command='echo "hello world"; exit 99;',
        dag=dag,
    )
    # [END howto_operator_bash_skip]
    this_will_skip >> run_this_last

if __name__ == "__main__":
    dag.cli()
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23610 | https://github.com/apache/airflow/pull/23617 | 60a1d9d191fb8fc01893024c897df9632ad5fbf4 | c5b72bf30c8b80b6c022055834fc7272a1a44526 | "2022-05-10T03:29:05Z" | python | "2022-05-10T17:13:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,588 | ["airflow/www/static/js/dag/details/taskInstance/taskActions/ClearInstance.tsx", "airflow/www/static/js/dag/details/taskInstance/taskActions/MarkInstanceAs.tsx"] | After upgrade from Airflow 2.2.4, grid disappears for some DAGs | ### Apache Airflow version
2.3.0 (latest released)
### What happened
After the upgrade from 2.2.4 to 2.3.0, the grid data for some DAGs seems to be missing and the UI renders blank.
### What you think should happen instead
When I click a grid cell for a specific execution date, I expect to be able to click the tasks and view the logs, render the Jinja templates, and clear the task status.
### How to reproduce
Run an upgrade from 2.2.4 to 2.3.0 with a huge database (we have ~750 DAGs with a minimum of 10 tasks each).
In addition, we heavily rely on XCom.
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
apache-airflow apache_airflow-2.3.0-py3-none-any.whl
apache-airflow-providers-amazon apache_airflow_providers_amazon-3.3.0-py3-none-any.whl
apache-airflow-providers-ftp apache_airflow_providers_ftp-2.1.2-py3-none-any.whl
apache-airflow-providers-http apache_airflow_providers_http-2.1.2-py3-none-any.whl
apache-airflow-providers-imap apache_airflow_providers_imap-2.2.3-py3-none-any.whl
apache-airflow-providers-mongo apache_airflow_providers_mongo-2.3.3-py3-none-any.whl
apache-airflow-providers-mysql apache_airflow_providers_mysql-2.2.3-py3-none-any.whl
apache-airflow-providers-pagerduty apache_airflow_providers_pagerduty-2.1.3-py3-none-any.whl
apache-airflow-providers-postgres apache_airflow_providers_postgres-4.1.0-py3-none-any.whl
apache-airflow-providers-sendgrid apache_airflow_providers_sendgrid-2.0.4-py3-none-any.whl
apache-airflow-providers-slack apache_airflow_providers_slack-4.2.3-py3-none-any.whl
apache-airflow-providers-sqlite apache_airflow_providers_sqlite-2.1.3-py3-none-any.whl
apache-airflow-providers-ssh apache_airflow_providers_ssh-2.4.3-py3-none-any.whl
apache-airflow-providers-vertica apache_airflow_providers_vertica-2.1.3-py3-none-any.whl
### Deployment
Virtualenv installation
### Deployment details
Python 3.8.10
### Anything else
This happens for the affected DAGs all the time.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23588 | https://github.com/apache/airflow/pull/32992 | 8bfad056d8ef481cc44288c5749fa5c54efadeaa | 943b97850a1e82e4da22e8489c4ede958a42213d | "2022-05-09T13:37:42Z" | python | "2023-08-03T08:29:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,580 | ["airflow/www/static/js/grid/AutoRefresh.jsx", "airflow/www/static/js/grid/Grid.jsx", "airflow/www/static/js/grid/Grid.test.jsx", "airflow/www/static/js/grid/Main.jsx", "airflow/www/static/js/grid/ToggleGroups.jsx", "airflow/www/static/js/grid/api/useGridData.test.jsx", "airflow/www/static/js/grid/details/index.jsx", "airflow/www/static/js/grid/index.jsx", "airflow/www/static/js/grid/renderTaskRows.jsx", "airflow/www/static/js/grid/renderTaskRows.test.jsx"] | `task_id` with `.` e.g. `hello.world` is not rendered in grid view | ### Apache Airflow version
2.3.0 (latest released)
### What happened
A `task_id` containing `.`, e.g. `hello.world`, is not rendered in the grid view.
### What you think should happen instead
The task should be rendered just fine in Grid view.
### How to reproduce
```
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Example DAG demonstrating the usage of the BashOperator."""
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
with DAG(
    dag_id="example_bash_operator",
    schedule_interval="0 0 * * *",
    start_date=datetime(2021, 1, 1),
    catchup=False,
    dagrun_timeout=timedelta(minutes=60),
    tags=["example", "example2"],
    params={"example_key": "example_value"},
) as dag:
    run_this_last = DummyOperator(
        task_id="run.this.last",
    )

    # [START howto_operator_bash]
    run_this = BashOperator(
        task_id="run.after.loop",
        bash_command="echo 1",
    )
    # [END howto_operator_bash]

    run_this >> run_this_last

    for i in range(3):
        task = BashOperator(
            task_id="runme." + str(i),
            bash_command='echo "{{ task_instance_key_str }}" && sleep 1',
        )
        task >> run_this

    # [START howto_operator_bash_template]
    also_run_this = BashOperator(
        task_id="also.run.this",
        bash_command='echo "run_id={{ run_id }} | dag_run={{ dag_run }}"',
    )
    # [END howto_operator_bash_template]
    also_run_this >> run_this_last

    # [START howto_operator_bash_skip]
    this_will_skip = BashOperator(
        task_id="this.will.skip",
        bash_command='echo "hello world"; exit 99;',
        dag=dag,
    )
    # [END howto_operator_bash_skip]
    this_will_skip >> run_this_last

if __name__ == "__main__":
    dag.cli()
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23580 | https://github.com/apache/airflow/pull/23590 | 028087b5a6e94fd98542d0e681d947979eb1011f | afdfece9372fed83602d50e2eaa365597b7d0101 | "2022-05-09T07:04:00Z" | python | "2022-05-12T19:48:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,576 | ["setup.py"] | The xmltodict 0.13.0 breaks some emr tests | ### Apache Airflow version
main (development)
### What happened
xmltodict 0.13.0 breaks some EMR tests (this is currently happening in `main`):
Example: https://github.com/apache/airflow/runs/6343826225?check_suite_focus=true#step:9:13417
```
tests/providers/amazon/aws/hooks/test_emr.py::TestEmrHook::test_create_job_flow_extra_args: ValueError: Malformatted input
tests/providers/amazon/aws/hooks/test_emr.py::TestEmrHook::test_create_job_flow_uses_the_emr_config_to_create_a_cluster: ValueError: Malformatted input
tests/providers/amazon/aws/hooks/test_emr.py::TestEmrHook::test_get_cluster_id_by_name: ValueError: Malformatted input
```
Downgrading to 0.12.0 fixes the problem.
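One possible stop-gap, sketched under the assumption that a temporary version cap in setup.py is acceptable until the root cause is understood (the dependency list name below is illustrative, not the actual variable in setup.py):
```python
# Hypothetical pin in a test/devel dependency list to keep the known-good version
devel_deps = [
    "xmltodict>=0.12.0,<0.13.0",
]
```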
### What you think should happen instead
The tests should work
### How to reproduce
* Run Breeze
* Run `pytest tests/providers/amazon/aws/hooks/test_emr.py` -> observe it to succeed
* Run `pip install xmltodict==0.13.0` -> observe it being upgraded from 0.12.0
* Run `pytest tests/providers/amazon/aws/hooks/test_emr.py` -> observe it to fail with `Malformed input` error
### Operating System
Any
### Versions of Apache Airflow Providers
Latest from main
### Deployment
Other
### Deployment details
CI
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23576 | https://github.com/apache/airflow/pull/23992 | 614b2329c1603ef1e2199044e2cc9e4b7332c2e0 | eec85d397ef0ecbbe5fd679cf5790adae2ad9c9f | "2022-05-09T01:07:36Z" | python | "2022-05-28T21:58:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,557 | ["airflow/operators/python.py", "tests/operators/test_python.py"] | templates_dict, op_args, op_kwargs no longer rendered in PythonVirtualenvOperator | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Templated strings in templates_dict, op_args, op_kwargs of PythonVirtualenvOperator are no longer rendered.
### What you think should happen instead
All templated strings in templates_dict, op_args and op_kwargs must be rendered, i.e. these 3 arguments must be template_fields of PythonVirtualenvOperator, as they were in Airflow 2.2.3.
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
This is due to the template_fields class variable being set on PythonVirtualenvOperator as
`template_fields: Sequence[str] = ('requirements',)`
which overrides the PythonOperator class variable
`template_fields = ('templates_dict', 'op_args', 'op_kwargs')`.
I read in a discussion that the intent was to make requirements a template field for PythonVirtualenvOperator, but all template fields of the parent class must be specified as well:
`template_fields: Sequence[str] = ('templates_dict', 'op_args', 'op_kwargs', 'requirements',)`
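For completeness, a minimal user-side sketch (class name hypothetical) that restores rendering on 2.3.0 by re-declaring the full field list:
```python
from airflow.operators.python import PythonVirtualenvOperator


class TemplatedPythonVirtualenvOperator(PythonVirtualenvOperator):
    # Re-declare the parent's template fields alongside 'requirements' so that
    # templates_dict, op_args and op_kwargs are rendered again.
    template_fields = ('templates_dict', 'op_args', 'op_kwargs', 'requirements')
```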
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23557 | https://github.com/apache/airflow/pull/23559 | 7132be2f11db24161940f57613874b4af86369c7 | 1657bd2827a3299a91ae0abbbfe4f6b80bd4cdc0 | "2022-05-07T11:49:44Z" | python | "2022-05-09T15:17:34Z" |