Dataset schema (column: type, values/range):
status: stringclasses (1 value)
repo_name: stringclasses (13 values)
repo_url: stringclasses (13 values)
issue_id: int64 (1 to 104k)
updated_files: stringlengths (11 to 1.76k)
title: stringlengths (4 to 369)
body: stringlengths (0 to 254k)
issue_url: stringlengths (38 to 55)
pull_url: stringlengths (38 to 53)
before_fix_sha: stringlengths (40 to 40)
after_fix_sha: stringlengths (40 to 40)
report_datetime: unknown
language: stringclasses (5 values)
commit_datetime: unknown
closed
apache/airflow
https://github.com/apache/airflow
17,120
["airflow/cli/commands/scheduler_command.py"]
[Scheduler error] psycopg2.OperationalError: SSL SYSCALL error: Socket operation on non-socket
Hi Airflow Team, I am running Airflow on an EC2 instance, installed from conda-forge during CodeDeploy. After upgrading Airflow from 2.0.2 to >=2.1.0, I am facing an error every time I try to start the scheduler in daemon mode using this command: ```airflow scheduler --daemon``` I took a look at the similar issue #11456 and tried to fix it with Python 3.8.10 and `python-daemon` 2.3.0, but it still doesn't work. The webserver is working fine, but it can't detect the scheduler. ``` Traceback (most recent call last): File "/home/ec2-user/anaconda3/envs/airflow_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect return fn() File "/home/ec2-user/anaconda3/envs/airflow_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 364, in connect return _ConnectionFairy._checkout(self) File "/home/ec2-user/anaconda3/envs/airflow_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 809, in _checkout result = pool._dialect.do_ping(fairy.connection) File "/home/ec2-user/anaconda3/envs/airflow_env/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 575, in do_ping cursor.execute(self._dialect_specific_select_one) psycopg2.OperationalError: SSL SYSCALL error: Socket operation on non-socket ``` Relevant package versions: `sqlalchemy`=1.3.23 `psycopg2`=2.8.6 `python-daemon`=2.3.0 `apache-airflow-providers-http`=2.0.0 `apache-airflow-providers-elasticsearch`=2.0.2
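The traceback is consistent with a pooled database connection surviving the daemon fork and then being reused over a socket that no longer belongs to the child process. As background only, not the actual fix in the linked PR, here is a minimal sketch of the usual remedy: dispose the SQLAlchemy pool inside the daemonized child so connections are re-created after the fork. The connection string is a placeholder.

```python
import daemon  # python-daemon
from sqlalchemy import create_engine

# Placeholder DSN; the point is only where engine.dispose() happens.
engine = create_engine("postgresql+psycopg2://airflow:airflow@localhost/airflow")

with daemon.DaemonContext():
    # Connections opened before the fork are bound to the parent's sockets;
    # dispose the pool so the child opens fresh ones.
    engine.dispose()
    with engine.connect() as conn:
        conn.execute("SELECT 1")
```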
https://github.com/apache/airflow/issues/17120
https://github.com/apache/airflow/pull/17157
b8abf1425004410ba8ca37385d294c650a2a7e06
e8fc3acfd9884312669c1d85b71f42a9aab29cf8
"2021-07-21T01:48:46Z"
python
"2021-08-01T18:45:01Z"
closed
apache/airflow
https://github.com/apache/airflow
17,111
["airflow/providers/google/CHANGELOG.rst", "airflow/providers/google/ads/hooks/ads.py", "docs/apache-airflow-providers-google/index.rst", "setup.py", "tests/providers/google/ads/operators/test_ads.py"]
apache-airflow-providers-google: google-ads-12.0.0
Hey team, I was looking to use the Google Ads hook, but the google-ads package pinned by the provider seems a bit out of date: the hook only accepts "v5", "v4", "v3", "v2" (https://developers.google.com/google-ads/api/docs/release-notes), and all of those API versions are deprecated. Is there any chance the provider can be upgraded to include a newer version? Here is the release note of Google's 12.0.0 release, which also deprecated v5: https://github.com/googleads/google-ads-python/releases/tag/12.0.0 **Apache Airflow version**: 2.0.1 **What happened**: the API endpoint is deprecated; the google-ads dependency needs to be updated to version 12.0.0 **What you expected to happen**: query data should be returned; instead, I get an error from the Google Ads v5 API **How to reproduce it**: attempt to hit the v5 API endpoint **Anything else we need to know**: the error is below ``` Response ------- Headers: { "google.ads.googleads.v5.errors.googleadsfailure-bin": "\nJ\n\u0002\b\u0001\u0012D Version v5 is deprecated. Requests to this version will be blocked.", "grpc-status-details-bin": "\b\u0003\u0012%Request contains an invalid argument.\u001a\u0001\nCtype.googleapis.com/google.ads.googleads.v5.errors.GoogleAdsFailure\u0012L\nJ\n\u0002\b\u0001\u0012D Version v5 is deprecated. Requests to this version will be blocked.", "request-id": "JyFZ9zysaqJbiCr_PX8SLA" } Fault: errors { error_code { request_error: UNKNOWN } message: " Version v5 is deprecated. Requests to this version will be blocked." } ```
https://github.com/apache/airflow/issues/17111
https://github.com/apache/airflow/pull/17160
966b2501995279b7b5f2e1d0bf1c63a511dd382e
5d2224795b3548516311025d5549094a9b168f3b
"2021-07-20T15:47:13Z"
python
"2021-07-25T20:55:49Z"
closed
apache/airflow
https://github.com/apache/airflow
17,083
["airflow/models/baseoperator.py", "docs/spelling_wordlist.txt", "tests/models/test_baseoperator.py"]
Update chain() to support Labels
**Description** The `airflow.models.baseoperator.chain()` is a very useful and convenient way to add sequential task dependencies in DAGs. This function has [recently been updated](https://github.com/apache/airflow/issues/16635) to support `BaseOperator` and `XComArgs` but should also be able to support `Labels` as well. **Use case / motivation** Users who create tasks via the `@task` decorator will not be able to use the `chain()` function to apply sequential dependencies that do not share an `XComArg` implicit dependency with a `Label`. This use case can occur when attempting to chain multiple branch labels and the next sequential task. With the new update (yet to be released), users will receive the following exception when attempting to chain an `XComArg` and `Label`: ```bash TypeError: Chain not supported between instances of <class 'airflow.utils.edgemodifier.EdgeModifier'> and <class 'airflow.models.xcom_arg.XComArg'> ``` **Are you willing to submit a PR?** Absolutely. 🚀 **Related Issues** None
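To make the requested behavior concrete, here is a small hedged sketch of the use case, assuming `chain()` learns to attach an `EdgeModifier` to the surrounding edge; with the (yet unreleased) update described above, this currently raises the `TypeError` shown in the issue.

```python
from datetime import datetime

from airflow.decorators import dag, task
from airflow.models.baseoperator import chain
from airflow.utils.edgemodifier import Label


@dag(start_date=datetime(2021, 7, 1), schedule_interval=None)
def labeled_chain():
    @task
    def extract():
        return "payload"

    @task
    def load(value=None):
        print(value)

    # No implicit XComArg dependency between these tasks, so chain() is the
    # natural way to wire them; the Label should annotate the edge in between.
    chain(extract(), Label("happy path"), load())


labeled_chain_dag = labeled_chain()
```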
https://github.com/apache/airflow/issues/17083
https://github.com/apache/airflow/pull/17099
01a0aca249eeaf71d182bf537b9d04121257ac09
29d8e7f50b6e946a6b6561cad99620e00a2c8360
"2021-07-19T14:23:55Z"
python
"2021-07-25T16:20:09Z"
closed
apache/airflow
https://github.com/apache/airflow
17,047
["airflow/www/static/js/dag_code.js"]
Toggle Wrap on DAG code page is broken
**Apache Airflow version**: `apache/airflow:2.1.2-python3.9` and `2.1.0-python3.8` **Environment**: - **Cloud provider or hardware configuration**: Docker for Windows, AWS ECS - **OS** (e.g. from /etc/os-release): Windows 10, AWS ECS Fargate - **Install tools**: docker compose, ECS - **Others**: Web browsers: tested this on Chrome and Brave. **What happened**: The `Toggle Wrap` button on the DAG code page is not working. **What you expected to happen**: It should toggle between wrapped/unwrapped code blocks. **How to reproduce it**: 1. Spin up an airflow environment using the official docker compose file with DAG examples enabled. 2. Open code page for any DAG that uses the [START xyz] [END xyz] blocks in its source code. 3. Click on the `Toggle Wrap` button in the top right corner of the code. ![airflow@chrome](https://user-images.githubusercontent.com/6844101/125958316-271e6fcf-2be3-45e4-9bf7-837294eee9da.png) **Additional remarks** This feature seems to be working totally fine on the TI logs, and by looking at the code they are re-using the same function.
https://github.com/apache/airflow/issues/17047
https://github.com/apache/airflow/pull/19211
eace4102b68e4964b47f2d8c555f65ceaf0a3690
a1632edac783878cb82d9099f4f973c9a10b0d0f
"2021-07-16T13:51:45Z"
python
"2021-11-03T14:19:31Z"
closed
apache/airflow
https://github.com/apache/airflow
17,037
["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"]
Status of testing Providers that were prepared on July 15, 2021
have a kind request for all the contributors to the latest provider packages release. Could you help us to test the RC versions of the providers and let us know in the comment, if the issue is addressed there. ## Providers that need testing Those are providers that require testing as there were some substantial changes introduced: ### Provider [amazon: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-amazon/2.1.0rc1) - [ ] [Allow attaching to previously launched task in ECSOperator (#16685)](https://github.com/apache/airflow/pull/16685): @pmalafosse - [ ] [Update AWS Base hook to use refreshable credentials (#16770) (#16771)](https://github.com/apache/airflow/pull/16771): @baolsen - [x] [Added select_query to the templated fields in RedshiftToS3Operator (#16767)](https://github.com/apache/airflow/pull/16767): @hewe - [ ] [AWS Hook - allow IDP HTTP retry (#12639) (#16612)](https://github.com/apache/airflow/pull/16612): @baolsen - [ ] [Update Boto3 API calls in ECSOperator (#16050)](https://github.com/apache/airflow/pull/16050): @scottypate - [ ] [AWS DataSync Operator does not cancel task on Exception (#11011)](https://github.com/apache/airflow/issues/11011): @baolsen - [ ] [Fix wrong template_fields_renderers for AWS operators (#16820)](https://github.com/apache/airflow/pull/16820): @codenamestif - [ ] [AWS DataSync cancel task on exception (#11011) (#16589)](https://github.com/apache/airflow/pull/16589): @baolsen ### Provider [apache.hive: 2.0.1rc1](https://pypi.org/project/apache-airflow-providers-apache-hive/2.0.1rc1) - [ ] [Add python 3.9 (#15515)](https://github.com/apache/airflow/pull/15515): @potiuk ### Provider [apache.sqoop: 2.0.1rc1](https://pypi.org/project/apache-airflow-providers-apache-sqoop/2.0.1rc1) - [x] [Fix Minor Bugs in Apache Sqoop Hook and Operator (#16350)](https://github.com/apache/airflow/pull/16350): @ciancolo ### Provider [cncf.kubernetes: 2.0.1rc1](https://pypi.org/project/apache-airflow-providers-cncf-kubernetes/2.0.1rc1) - [x] [BugFix: Using `json` string in template_field fails with K8s Operators (#16930)](https://github.com/apache/airflow/pull/16930): @kaxil ### ~Provider [docker: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-docker/2.1.0rc1)~ ~- [ ] [Adds option to disable mounting temporary folder in DockerOperator (#16932)(https://github.com/apache/airflow/pull/16932): @potiuk: bug found.~ ### Provider [google: 4.1.0rc1](https://pypi.org/project/apache-airflow-providers-google/4.1.0rc1) - [ ] [Standardise dataproc location param to region (#16034)](https://github.com/apache/airflow/pull/16034): @Daniel-Han-Yang - [ ] [Update alias for field_mask in Google Memmcache (#16975)](https://github.com/apache/airflow/pull/16975): @potiuk ### Provider [jenkins: 2.0.1rc1](https://pypi.org/project/apache-airflow-providers-jenkins/2.0.1rc1) - [ ] [Fixed to check number key from jenkins response (#16963)](https://github.com/apache/airflow/pull/16963): @namjals ### Provider [microsoft.azure: 3.1.0rc1](https://pypi.org/project/apache-airflow-providers-microsoft-azure/3.1.0rc1) - [ ] [Add support for managed identity in WASB hook (#16628)](https://github.com/apache/airflow/pull/16628): @malthe - [ ] [WASB hook: reduce log messages for happy path (#16626)](https://github.com/apache/airflow/pull/16626): @malthe - [ ] [Fix multiple issues in Microsoft AzureContainerInstancesOperator (#15634)](https://github.com/apache/airflow/pull/15634): @BKronenbitter ### ~Provider [mysql: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-mysql/2.1.0rc1): 
Marking for RC2 release~ ~- [ ] [Added template_fields_renderers for MySQL Operator (#16914)](https://github.com/apache/airflow/pull/16914): @oyarushe~ ~- [ ] [Extended template_fields_renderers for MySQL provider (#16987)](https://github.com/apache/airflow/pull/16987):~ @oyarushe ### Provider [postgres: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-postgres/2.1.0rc1) - [ ] [Add schema override in DbApiHook (#16521)](https://github.com/apache/airflow/pull/16521): @LukeHong ### Provider [sftp: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-sftp/2.1.0rc1) - [ ] [Add support for non-RSA type client host key (#16314)](https://github.com/apache/airflow/pull/16314): @malthe ### Provider [snowflake: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-snowflake/2.1.0rc1) - [x] [Adding: Snowflake Role in snowflake provider hook (#16735)](https://github.com/apache/airflow/pull/16735): @saurasingh ### Provider [ssh: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-ssh/2.1.0rc1) - [ ] [Add support for non-RSA type client host key (#16314)](https://github.com/apache/airflow/pull/16314): @malthe - [ ] [SSHHook: Using correct hostname for host_key when using non-default ssh port (#15964)](https://github.com/apache/airflow/pull/15964): @freget - [ ] [Correctly load openssh-gerenated private keys in SSHHook (#16756)](https://github.com/apache/airflow/pull/16756): @ashb ### Provider [tableau: 2.1.0rc1](https://pypi.org/project/apache-airflow-providers-tableau/2.1.0rc1) - [ ] [Allow disable SSL for TableauHook (#16365)](https://github.com/apache/airflow/pull/16365): @ciancolo - [ ] [Deprecate Tableau personal token authentication (#16916)](https://github.com/apache/airflow/pull/16916): @samgans ## New Providers - [x] [apache.drill: 1.0.0rc1](https://pypi.org/project/apache-airflow-providers-apache-drill/1.0.0rc1) @dzamo
https://github.com/apache/airflow/issues/17037
https://github.com/apache/airflow/pull/17061
16564cad6f2956ecb842455d9d6a6255f8d3d817
b076ac5925e1a316dd6e9ad8ee4d1a2223e376ca
"2021-07-15T18:10:18Z"
python
"2021-07-18T13:15:10Z"
closed
apache/airflow
https://github.com/apache/airflow
17,032
["airflow/providers/google/cloud/operators/bigquery.py", "airflow/www/views.py", "docs/apache-airflow/howto/custom-operator.rst", "docs/apache-airflow/img/template_field_renderer_path.png", "tests/www/views/test_views.py"]
Improved SQL rendering within BigQueryInsertJobOperator
**Description** `BigQueryInsertJobOperator` requires the submission of a `configuration` parameter in the form of a dict. Unfortunately, if this contains a large SQL query - especially one that is formatted with new lines - then this cannot currently be rendered very nicely in the UI. <img width="1670" alt="Screenshot 2021-07-15 at 15 39 33" src="https://user-images.githubusercontent.com/967119/125806943-839d57a9-d4a0-492d-b130-06432b095239.png"> **Use case / motivation** The issue with this is that it's impossible to copy and paste the rendered query out of the Airflow UI, into a BigQuery browser and run it without lots of manual edits which is time wasted when troubleshooting problems. **Are you willing to submit a PR?** Yes. My current thought process around this would be to add an optional SQL parameter to the operator which, if provided, would be added into the configuration and could therefore have its own template field and SQL renderer. e.g. <img width="1570" alt="Screenshot 2021-07-14 at 14 18 09" src="https://user-images.githubusercontent.com/967119/125808200-5f30b8f4-4def-48a7-8223-82afdc65c973.png">
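For reference, a minimal sketch of how the operator is used today, which is what makes the rendering awkward: the SQL is just one string buried inside the `configuration` dict. The DAG, project, and table names below are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG("bq_render_demo", start_date=datetime(2021, 7, 1), schedule_interval=None) as dag:
    insert_job = BigQueryInsertJobOperator(
        task_id="run_query",
        configuration={
            "query": {
                # Rendered as part of a JSON blob rather than as SQL in the UI.
                "query": "SELECT order_id, SUM(amount) AS total FROM `my-project.sales.orders` GROUP BY order_id",
                "useLegacySql": False,
            }
        },
    )
```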
https://github.com/apache/airflow/issues/17032
https://github.com/apache/airflow/pull/17321
97428efc41e5902183827fb9e4e56d067ca771df
67cbb0f181f806edb16ca12fb7a2638b5f31eb58
"2021-07-15T14:49:02Z"
python
"2021-08-02T14:44:27Z"
closed
apache/airflow
https://github.com/apache/airflow
17,031
["airflow/providers/yandex/example_dags/example_yandexcloud_dataproc.py", "airflow/providers/yandex/hooks/yandex.py", "airflow/providers/yandex/hooks/yandexcloud_dataproc.py", "airflow/providers/yandex/operators/yandexcloud_dataproc.py", "docs/spelling_wordlist.txt", "setup.py", "tests/providers/yandex/hooks/test_yandexcloud_dataproc.py", "tests/providers/yandex/operators/test_yandexcloud_dataproc.py"]
Add autoscaling support to yandexcloud operator
* and stop setting default values in python operator code, so defaults can be set at the server side. This issue is just for the PR and questions from maintainers.
https://github.com/apache/airflow/issues/17031
https://github.com/apache/airflow/pull/17033
0e6e04e5f80eaf186d28ac62d4178e971ccf32bc
e3089dd5d045cf6daf8f15033a4cc879db0df5b5
"2021-07-15T14:44:00Z"
python
"2021-08-02T11:06:22Z"
closed
apache/airflow
https://github.com/apache/airflow
17,014
["airflow/models/baseoperator.py", "tests/models/test_baseoperator.py"]
Changes to BaseOperatorMeta breaks __init_subclass__
**Apache Airflow version**: 2.1.2 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.19.8 **Environment**: - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): Amazon Linux EKS - **Kernel** (e.g. `uname -a`): - **Install tools**: Helm (community chart) - **Others**: **What happened**: With the addition of `__new__` to the `BaseOperatorMeta` class, this breaks our usage of operators that allow configuration through `__init_subclass__` arguments. Relevant python bug: https://bugs.python.org/issue29581 **What you expected to happen**: We should be able to use `__init_subclass__` in operators as we used to be able to. We relied on this behavior to customize some of our subclasses. This is working in 2.0.2 (though might exactly be considered a regression). **How to reproduce it**: This is a broken example ```py3 import datetime from airflow import DAG from airflow.operators.bash import BashOperator class PrefixedBashOperator(BashOperator): def __init_subclass__(cls, command: str = None, **kwargs): if command is not None: cls._command = command super().__init_subclass__(**kwargs) def __init__(self, bash_command, **kwargs): super().__init__(bash_command=self._command + ' ' + bash_command, **kwargs) class EchoOperator(PrefixedBashOperator, command='echo'): pass with DAG(dag_id='foo', start_date=datetime.datetime(2021, 7, 1)) as dag: EchoOperator(task_id='foo', bash_command='-e "from airflow"', dag=dag) ``` This results in error: ``` TypeError: __new__() got an unexpected keyword argument 'command' ``` **Anything else we need to know**: This example works. This shows that all that is needed to fix the issue is add `**kwargs` to `__new__` ```py3 import datetime from abc import ABCMeta from airflow import DAG from airflow.models.baseoperator import BaseOperatorMeta from airflow.operators.bash import BashOperator class NewBaseOperatorMeta(BaseOperatorMeta): def __new__(cls, name, bases, namespace, **kwargs): new_cls = super(ABCMeta, cls).__new__(cls, name, bases, namespace, **kwargs) new_cls.__init__ = cls._apply_defaults(new_cls.__init__) # type: ignore return new_cls class PrefixedBashOperator(BashOperator, metaclass=NewBaseOperatorMeta): def __init_subclass__(cls, command: str = None, **kwargs): if command is not None: cls._command = command super().__init_subclass__(**kwargs) def __init__(self, bash_command, **kwargs): super().__init__(bash_command=self._command + ' ' + bash_command, **kwargs) class EchoOperator(PrefixedBashOperator, command='echo'): pass with DAG(dag_id='foo', start_date=datetime.datetime(2021, 7, 1)) as dag: EchoOperator(task_id='foo', bash_command='-e "from airflow"', dag=dag) ```
https://github.com/apache/airflow/issues/17014
https://github.com/apache/airflow/pull/17027
34478c26d7de1328797e03bbf96d8261796fccbb
901513203f287d4f8152f028e9070a2dec73ad74
"2021-07-15T05:27:46Z"
python
"2021-07-22T22:23:51Z"
closed
apache/airflow
https://github.com/apache/airflow
17,005
["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"]
`retry_exponential_backoff` algorithm does not account for case when `retry_delay` is zero
**Apache Airflow version**: 2.1.2 **What happened**: When `retry_exponential_backoff` is enabled and `retry_delay` is inadvertently set to zero, a divide-by-zero error occurs in the `modded_hash` calculation of the exponential backoff algorithm, causing the scheduler to crash. **What you expected to happen**: The scheduler should treat it as a task with a `retry_delay` of zero. **How to reproduce it**: Create a task with `retry_delay=timedelta()` and `retry_exponential_backoff=True`. **Anything else we need to know**: Willing to submit a PR; opened #17003 (WIP) with a possible fix.
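A simplified, hypothetical sketch of the failure mode and the kind of guard being proposed; this is not Airflow's actual backoff code, just the hash-modulo pattern the report describes, with `next_retry_delay` as an illustrative name.

```python
import hashlib
from datetime import timedelta


def next_retry_delay(retry_delay: timedelta, try_number: int, task_key: str) -> timedelta:
    # Exponential growth of the configured delay.
    cap = int(retry_delay.total_seconds() * (2 ** (try_number - 1)))
    if cap <= 0:
        # Guard: with retry_delay=timedelta() the modulo below would raise ZeroDivisionError.
        return timedelta(0)
    # Stable pseudo-random jitter derived from the task identity, kept below the cap.
    digest = int(hashlib.sha1(task_key.encode()).hexdigest(), 16)
    return timedelta(seconds=digest % cap)


print(next_retry_delay(timedelta(), try_number=2, task_key="dag_id.task_id"))            # 0:00:00, no crash
print(next_retry_delay(timedelta(minutes=1), try_number=3, task_key="dag_id.task_id"))   # jittered backoff
```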
https://github.com/apache/airflow/issues/17005
https://github.com/apache/airflow/pull/17003
0199c5d51aa7d34b7e3e8e6aad73ab80b6018e8b
6e2a3174dfff2e396c38be0415df55cfe0d76f45
"2021-07-14T20:23:09Z"
python
"2021-09-30T07:32:46Z"
closed
apache/airflow
https://github.com/apache/airflow
16,992
["airflow/kubernetes/kubernetes_helper_functions.py", "tests/executors/test_kubernetes_executor.py"]
Pod fails to run when task or dag name contains non ASCII characters (k8s executor)
**Apache Airflow version**: 2.0.1 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.20.5 **Environment**: - **Cloud provider or hardware configuration**: Azure AKS **What happened**: When a task or DAG name contains a non-ASCII character, pod creation fails with a kubernetes.client.rest.ApiException (422) and the task stays in the scheduled state. This happens because the hostname defined for the pod is based on the DAG and task names. **What you expected to happen**: The pod is created and the task runs. **How to reproduce it**: Run a task named "campaña" on the K8s executor. Error log: ``` {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod \"dagnamecampaña.0fb696e661e347968216df454b41b56f\" is invalid: metadata.name: Invalid value: \"dagnamecampaña.0fb696e661e347968216df454b41b56f\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')","reason":"Invalid" ```
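A hedged sketch of the kind of normalization a fix would need (not the helper Airflow ended up with): fold the dag/task-derived name to ASCII and strip anything outside the RFC 1123 character set. `make_safe_pod_id` and its arguments are illustrative names.

```python
import re
import unicodedata


def make_safe_pod_id(dag_id: str, task_id: str, unique_suffix: str) -> str:
    raw = f"{dag_id}{task_id}".lower()
    # Fold accented characters to ASCII and drop anything that cannot be folded,
    # e.g. "campaña" becomes "campana".
    ascii_only = unicodedata.normalize("NFKD", raw).encode("ascii", "ignore").decode()
    # Replace remaining characters outside the RFC 1123 set and trim edge symbols.
    safe = re.sub(r"[^a-z0-9.-]+", "-", ascii_only).strip("-.")
    return f"{safe[:63]}.{unique_suffix}"


print(make_safe_pod_id("dagname", "campaña", "0fb696e661e347968216df454b41b56f"))
# -> dagnamecampana.0fb696e661e347968216df454b41b56f
```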
https://github.com/apache/airflow/issues/16992
https://github.com/apache/airflow/pull/17057
1a0730a08f2d72cd71447b6d6549ec10d266dd6a
a4af964c1ad2c419ef51cd9d717f5aac7ed60b39
"2021-07-14T15:13:59Z"
python
"2021-07-19T19:45:53Z"
closed
apache/airflow
https://github.com/apache/airflow
16,976
["airflow/www/views.py"]
Rendered Templates - py renderer doesn't work for list or dict
**Apache Airflow version**: 2.1.1 **What happened**: The rendered templates screen doesn't show the operator arguments for TaskFlow API operators. ![image](https://user-images.githubusercontent.com/72943478/125486179-f6dde0be-9f60-4aca-831a-5cfd16e7f006.png) **What you expected to happen**: It should show the arguments after any templates have been rendered. **How to reproduce it**: This will happen for any `@task`-decorated operator. **Anything else we need to know**: This issue appears on a number of operators. A possible solution is to modify "get_python_source" in utils/code_utils.py to handle list and dict. This would let any operator that uses py as the renderer handle lists and dicts. Possibly something like: ``` if isinstance(x, list): return [str(v) for v in x] if isinstance(x, dict): return {k: str(v) for k, v in x.items()} ``` Converting the values to strings seems necessary in order to avoid errors when this is passed to the pygments lexer. ![image](https://user-images.githubusercontent.com/72943478/125494716-01cf0c11-9e5f-45fa-afe2-4f99409edefc.png)
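Expanding the snippet above into something self-contained; this is a hypothetical wrapper (the real change would live around `get_python_source` in `airflow/utils/code_utils.py`), but it shows the intended behavior for container values.

```python
import inspect
from typing import Any


def render_python_value(x: Any) -> Any:
    # Stringify container elements so the pygments lexer never sees raw objects.
    if isinstance(x, list):
        return [str(v) for v in x]
    if isinstance(x, dict):
        return {k: str(v) for k, v in x.items()}
    if callable(x):
        return inspect.getsource(x)
    return str(x)


print(render_python_value({"op_args": [1, 2], "op_kwargs": {"name": "demo"}}))
# -> {'op_args': '[1, 2]', 'op_kwargs': "{'name': 'demo'}"}
```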
https://github.com/apache/airflow/issues/16976
https://github.com/apache/airflow/pull/17082
636625fdb99e6b7beb1375c5df52b06c09e6bafb
1a0730a08f2d72cd71447b6d6549ec10d266dd6a
"2021-07-13T17:02:00Z"
python
"2021-07-19T19:22:42Z"
closed
apache/airflow
https://github.com/apache/airflow
16,951
["airflow/models/baseoperator.py", "airflow/ti_deps/deps/trigger_rule_dep.py", "airflow/utils/trigger_rule.py", "docs/apache-airflow/concepts/dags.rst", "tests/ti_deps/deps/test_trigger_rule_dep.py", "tests/utils/test_trigger_rule.py"]
add all_skipped trigger rule
I have use cases where I want to run tasks if all direct upstream tasks are skipped. The `all_done` trigger rule isn't enough for this use case.
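A usage sketch of what this would enable, assuming the rule is exposed as `TriggerRule.ALL_SKIPPED` once implemented (it does not exist at the time of this report); the DAG and commands are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.trigger_rule import TriggerRule

with DAG("all_skipped_demo", start_date=datetime(2021, 7, 1), schedule_interval=None) as dag:
    cleanup = BashOperator(
        task_id="run_only_if_everything_upstream_skipped",
        bash_command="echo 'all direct upstream tasks were skipped'",
        # Assumption: the requested rule is named ALL_SKIPPED.
        trigger_rule=TriggerRule.ALL_SKIPPED,
    )
```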
https://github.com/apache/airflow/issues/16951
https://github.com/apache/airflow/pull/21662
f0b6398dd642dfb75c1393e8c3c88682794d152c
537c24433014d3d991713202df9c907e0f114d5d
"2021-07-12T18:32:17Z"
python
"2022-02-26T21:42:20Z"
closed
apache/airflow
https://github.com/apache/airflow
16,944
["Dockerfile"]
Include `Authlib` in Airflow Docker images
**Description** `Authlib` is required for FAB authentication. Currently, it's not included in the Airflow images and must be pip-installed separately. It's a small package supporting core functionality (webserver UI authentication), so it would make sense to include it.
https://github.com/apache/airflow/issues/16944
https://github.com/apache/airflow/pull/17093
d268016a7a6ff4b65079f1dea080ead02aea99bb
3234527284ce01db67ba22c544f71ddaf28fa27e
"2021-07-12T15:40:58Z"
python
"2021-07-19T23:12:03Z"
closed
apache/airflow
https://github.com/apache/airflow
16,939
["Dockerfile", "scripts/docker/compile_www_assets.sh"]
UI is broken for `breeze kind-cluster deploy`
Using `breeze kind-cluster deploy` to deploy airflow in Kubernetes cluster for development results in unusable UI **Apache Airflow version**: main **How to reproduce it**: Start kind cluster with `./breeze kind-cluster start` Deploy airflow with `./breeze kind-cluster deploy` Check the UI and see that it's broken: ![airflowui](https://user-images.githubusercontent.com/4122866/125270717-da304100-e301-11eb-862d-0526ffe7fad2.PNG) **Anything else we need to know**: This is likely as a result of https://github.com/apache/airflow/pull/16577
https://github.com/apache/airflow/issues/16939
https://github.com/apache/airflow/pull/17086
bb1d79cb81c5a5a80f97ab4fecfa7db7a52c7b4b
660027f65d5333368aad7f16d3c927b9615e60ac
"2021-07-12T10:11:53Z"
python
"2021-07-19T17:52:15Z"
closed
apache/airflow
https://github.com/apache/airflow
16,922
["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py"]
Using the string ".json" in a dag makes KubernetesPodOperator worker unable to trigger the pod
<!-- Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions. Don't worry if they're not all applicable; just try to include what you can :-) If you need to include code snippets or logs, please put them in fenced code blocks. If they're super-long, please use the details tag like <details><summary>super-long log</summary> lots of stuff </details> Please delete these comment blocks before submitting the issue. --> <!-- IMPORTANT!!! PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE NEXT TO "SUBMIT NEW ISSUE" BUTTON!!! PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!! Please complete the next sections or the issue will be closed. These questions are the first thing we need to know to understand the context. --> **Apache Airflow version**: 2.1.1 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): ``` Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:11:29Z", GoVersion:"go1.16.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"} WARNING: version difference between client (1.21) and server (1.18) exceeds the supported minor version skew of +/-1 ``` **Environment**: - **Cloud provider or hardware configuration**: Scalway Kubernetes Kapsule - **OS** (e.g. from /etc/os-release): macOS - **Kernel** (e.g. `uname -a`): Darwin Louisons-MacBook-Pro.local 20.5.0 Darwin Kernel Version 20.5.0: Sat May 8 05:10:33 PDT 2021; root:xnu-7195.121.3~9/RELEASE_X86_64 x86_64 - **Install tools**: - **Others**: **What happened**: While trying to write a simple dag with KubernetesPodExecutor, I noticed that in certain cases, the pod is launched but not always. By investigating a bit more, I found that when the string `".json"` is present in parameters of the KubernetesPodOperator, it will not work. I tried to set up a minimal example to reproduce the bug. I manage to reproduce the bug on my kubernetes cluster and my Airflow instance (if it can help) ```python import datetime import airflow from airflow.utils.dates import days_ago from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import \ KubernetesPodOperator DAG_NAME = "trigger_test" default_args = { "owner": "Rapsodie Data", "depends_on_past": False, "wait_for_downstream": False, "email": [""], "email_on_failure": False, "email_on_retry": False, "retries": 0, "retry_delay": datetime.timedelta(minutes=20), } with airflow.DAG( "michel", catchup=False, default_args=default_args, start_date=days_ago(1), schedule_interval="*/10 * * * *", ) as dag: kubernetes_min_pod_json = KubernetesPodOperator( # The ID specified for the task. task_id='pod-ex-minimum_json', name='pod-ex-minimum_json', cmds=['echo'], namespace='default', arguments=["vivi.json"], image='gcr.io/gcp-runtimes/ubuntu_18_0_4' ) kubernetes_min_pod_txt = KubernetesPodOperator( # The ID specified for the task. task_id='pod-ex-minimum_txt', name='pod-ex-minimum_txt', cmds=['echo'], namespace='default', arguments=["vivi.txt"], image='gcr.io/gcp-runtimes/ubuntu_18_0_4' ) kubernetes_min_pod_json kubernetes_min_pod_txt ``` No error message or log to give here. 
Here is the logs of the scheduler while trying to execute one run: ``` [2021-07-10 14:30:49,356] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.36d56ddf03e544669100f7a99657db6d had an event of type MODIFIED [2021-07-10 14:30:49,356] {kubernetes_executor.py:205} INFO - Event: michelpodexminimumtxt.36d56ddf03e544669100f7a99657db6d Succeeded [2021-07-10 14:30:49,996] {kubernetes_executor.py:368} INFO - Attempting to finish pod; pod_id: michelpodexminimumtxt.36d56ddf03e544669100f7a99657db6d; state: None; annotations: {'dag_id': 'michel', 'task_id': 'pod-ex-minimum_txt', 'execution_date': '2021-07-10T14:20:00+00:00', 'try_number': '1'} [2021-07-10 14:30:50,004] {kubernetes_executor.py:546} INFO - Changing state of (TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_txt', execution_date=datetime.datetime(2021, 7, 10, 14, 20, tzinfo=tzlocal()), try_number=1), None, 'michelpodexminimumtxt.36d56ddf03e544669100f7a99657db6d', 'default', '56653001583') to None [2021-07-10 14:30:50,006] {scheduler_job.py:1222} INFO - Executor reports execution of michel.pod-ex-minimum_txt execution_date=2021-07-10 14:20:00+00:00 exited with status None for try_number 1 [2021-07-10 14:31:00,478] {scheduler_job.py:964} INFO - 2 tasks up for execution: <TaskInstance: michel.pod-ex-minimum_txt 2021-07-10 14:30:59.199174+00:00 [scheduled]> <TaskInstance: michel.pod-ex-minimum_json 2021-07-10 14:30:59.199174+00:00 [scheduled]> [2021-07-10 14:31:00,483] {scheduler_job.py:993} INFO - Figuring out tasks to run in Pool(name=default_pool) with 128 open slots and 2 task instances ready to be queued [2021-07-10 14:31:00,483] {scheduler_job.py:1021} INFO - DAG michel has 0/16 running and queued tasks [2021-07-10 14:31:00,484] {scheduler_job.py:1021} INFO - DAG michel has 1/16 running and queued tasks [2021-07-10 14:31:00,484] {scheduler_job.py:1086} INFO - Setting the following tasks to queued state: <TaskInstance: michel.pod-ex-minimum_txt 2021-07-10 14:30:59.199174+00:00 [scheduled]> <TaskInstance: michel.pod-ex-minimum_json 2021-07-10 14:30:59.199174+00:00 [scheduled]> [2021-07-10 14:31:00,492] {scheduler_job.py:1128} INFO - Sending TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_txt', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=Timezone('UTC')), try_number=1) to executor with priority 1 and queue default [2021-07-10 14:31:00,492] {base_executor.py:82} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'michel', 'pod-ex-minimum_txt', '2021-07-10T14:30:59.199174+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/dags/repo/dags/k8s.py'] [2021-07-10 14:31:00,493] {scheduler_job.py:1128} INFO - Sending TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_json', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=Timezone('UTC')), try_number=1) to executor with priority 1 and queue default [2021-07-10 14:31:00,493] {base_executor.py:82} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'michel', 'pod-ex-minimum_json', '2021-07-10T14:30:59.199174+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/dags/repo/dags/k8s.py'] [2021-07-10 14:31:00,498] {kubernetes_executor.py:504} INFO - Add task TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_txt', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=Timezone('UTC')), try_number=1) with command ['airflow', 'tasks', 'run', 'michel', 'pod-ex-minimum_txt', '2021-07-10T14:30:59.199174+00:00', '--local', '--pool', 'default_pool', 
'--subdir', '/opt/airflow/dags/repo/dags/k8s.py'] with executor_config {} [2021-07-10 14:31:00,500] {kubernetes_executor.py:504} INFO - Add task TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_json', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=Timezone('UTC')), try_number=1) with command ['airflow', 'tasks', 'run', 'michel', 'pod-ex-minimum_json', '2021-07-10T14:30:59.199174+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/dags/repo/dags/k8s.py'] with executor_config {} [2021-07-10 14:31:00,503] {kubernetes_executor.py:292} INFO - Kubernetes job is (TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_txt', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=Timezone('UTC')), try_number=1), ['airflow', 'tasks', 'run', 'michel', 'pod-ex-minimum_txt', '2021-07-10T14:30:59.199174+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/dags/repo/dags/k8s.py'], None, None) [2021-07-10 14:31:00,558] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type ADDED [2021-07-10 14:31:00,558] {scheduler_job.py:1222} INFO - Executor reports execution of michel.pod-ex-minimum_txt execution_date=2021-07-10 14:30:59.199174+00:00 exited with status queued for try_number 1 [2021-07-10 14:31:00,559] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b Pending [2021-07-10 14:31:00,559] {scheduler_job.py:1222} INFO - Executor reports execution of michel.pod-ex-minimum_json execution_date=2021-07-10 14:30:59.199174+00:00 exited with status queued for try_number 1 [2021-07-10 14:31:00,563] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type MODIFIED [2021-07-10 14:31:00,563] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b Pending [2021-07-10 14:31:00,576] {scheduler_job.py:1249} INFO - Setting external_id for <TaskInstance: michel.pod-ex-minimum_json 2021-07-10 14:30:59.199174+00:00 [queued]> to 1 [2021-07-10 14:31:00,577] {scheduler_job.py:1249} INFO - Setting external_id for <TaskInstance: michel.pod-ex-minimum_txt 2021-07-10 14:30:59.199174+00:00 [queued]> to 1 [2021-07-10 14:31:00,621] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type MODIFIED [2021-07-10 14:31:00,622] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b Pending [2021-07-10 14:31:00,719] {kubernetes_executor.py:292} INFO - Kubernetes job is (TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_json', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=Timezone('UTC')), try_number=1), ['airflow', 'tasks', 'run', 'michel', 'pod-ex-minimum_json', '2021-07-10T14:30:59.199174+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/dags/repo/dags/k8s.py'], None, None) [2021-07-10 14:31:00,752] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type ADDED [2021-07-10 14:31:00,752] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 Pending [2021-07-10 14:31:00,769] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type MODIFIED [2021-07-10 14:31:00,770] {kubernetes_executor.py:200} INFO - Event: 
michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 Pending [2021-07-10 14:31:00,870] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type MODIFIED [2021-07-10 14:31:00,871] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 Pending [2021-07-10 14:31:03,961] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type MODIFIED [2021-07-10 14:31:03,961] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b Pending [2021-07-10 14:31:05,538] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type MODIFIED [2021-07-10 14:31:05,542] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 Pending [2021-07-10 14:31:07,092] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type MODIFIED [2021-07-10 14:31:07,092] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b Pending [2021-07-10 14:31:08,163] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of type MODIFIED [2021-07-10 14:31:08,164] {kubernetes_executor.py:208} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b is Running [2021-07-10 14:31:08,818] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type MODIFIED [2021-07-10 14:31:08,820] {kubernetes_executor.py:200} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 Pending [2021-07-10 14:31:09,924] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type MODIFIED [2021-07-10 14:31:09,925] {kubernetes_executor.py:208} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 is Running [2021-07-10 14:31:28,861] {dagrun.py:429} ERROR - Marking run <DagRun michel @ 2021-07-10 14:30:59.199174+00:00: manual__2021-07-10T14:30:59.199174+00:00, externally triggered: True> failed [2021-07-10 14:31:45,227] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 had an event of type MODIFIED [2021-07-10 14:31:45,227] {kubernetes_executor.py:205} INFO - Event: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1 Succeeded [2021-07-10 14:31:45,454] {kubernetes_executor.py:368} INFO - Attempting to finish pod; pod_id: michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1; state: None; annotations: {'dag_id': 'michel', 'task_id': 'pod-ex-minimum_json', 'execution_date': '2021-07-10T14:30:59.199174+00:00', 'try_number': '1'} [2021-07-10 14:31:45,459] {kubernetes_executor.py:546} INFO - Changing state of (TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_json', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=tzlocal()), try_number=1), None, 'michelpodexminimumjson.db72c28bed7e4d0cad6cf8594bcbd4f1', 'default', '56653030468') to None [2021-07-10 14:31:45,463] {scheduler_job.py:1222} INFO - Executor reports execution of michel.pod-ex-minimum_json execution_date=2021-07-10 14:30:59.199174+00:00 exited with status None for try_number 1 [2021-07-10 14:31:47,817] {kubernetes_executor.py:147} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b had an event of 
type MODIFIED [2021-07-10 14:31:47,818] {kubernetes_executor.py:205} INFO - Event: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b Succeeded [2021-07-10 14:31:48,373] {kubernetes_executor.py:368} INFO - Attempting to finish pod; pod_id: michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b; state: None; annotations: {'dag_id': 'michel', 'task_id': 'pod-ex-minimum_txt', 'execution_date': '2021-07-10T14:30:59.199174+00:00', 'try_number': '1'} [2021-07-10 14:31:48,376] {kubernetes_executor.py:546} INFO - Changing state of (TaskInstanceKey(dag_id='michel', task_id='pod-ex-minimum_txt', execution_date=datetime.datetime(2021, 7, 10, 14, 30, 59, 199174, tzinfo=tzlocal()), try_number=1), None, 'michelpodexminimumtxt.a291f1d7ffeb45abb86c51c9b7b5a95b', 'default', '56653031774') to None [2021-07-10 14:31:48,378] {scheduler_job.py:1222} INFO - Executor reports execution of michel.pod-ex-minimum_txt execution_date=2021-07-10 14:30:59.199174+00:00 exited with status None for try_number 1 ``` Don't hesitate to ask me if you need more info
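The linked fix (#16930) is titled around `json` strings in template fields, which suggests the likely mechanism: templated strings ending in one of the operator's `template_ext` extensions are treated as template *files* to load rather than literal values, so an argument like "vivi.json" is looked up on disk and the task fails. A rough, hypothetical illustration of that resolution rule (not the provider's code):

```python
from typing import Sequence

# Extensions an operator might declare in template_ext (illustrative values).
TEMPLATE_EXT: Sequence[str] = (".json", ".yaml", ".yml")


def resolves_to_template_file(value: str) -> bool:
    # Jinja-templated strings ending in a declared extension get loaded from disk.
    return value.endswith(tuple(TEMPLATE_EXT))


print(resolves_to_template_file("vivi.json"))  # True  -> looked up as a file, task fails
print(resolves_to_template_file("vivi.txt"))   # False -> passed through to the pod as-is
```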
https://github.com/apache/airflow/issues/16922
https://github.com/apache/airflow/pull/16930
d3f300fba8c252cac79a1654fddb91532f44c656
b2c66e45b7c27d187491ec6a1dd5cc92ac7a1e32
"2021-07-10T14:35:22Z"
python
"2021-07-11T17:35:04Z"
closed
apache/airflow
https://github.com/apache/airflow
16,921
["airflow/providers/salesforce/operators/bulk.py", "airflow/providers/salesforce/provider.yaml", "docs/apache-airflow-providers-salesforce/operators/bulk.rst", "docs/apache-airflow-providers-salesforce/operators/index.rst", "tests/providers/salesforce/operators/test_bulk.py", "tests/system/providers/salesforce/example_bulk.py"]
Add support for Salesforce Bulk API
**Description** Salesforce Bulk API is very popular to retrieve/push data to Salesforce, a maximum of 10k records can be pushed in the bulk API. Add a separate hook to support bulk Api SalesforceBulkApiHook https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_intro.htm **Use case / motivation** In a lot of organizations this might be useful for performing ETL from Bigquery or data storage platforms to Salesforce using Bulk API. **Are you willing to submit a PR?** Yes **Related Issues**
https://github.com/apache/airflow/issues/16921
https://github.com/apache/airflow/pull/24473
34b2ed4066794368f9bcf96b7ccd5a70ee342639
b6a27594174c888af31d3fc71ea5f8b589883a12
"2021-07-10T11:42:53Z"
python
"2022-07-05T05:17:07Z"
closed
apache/airflow
https://github.com/apache/airflow
16,911
["UPDATING.md", "airflow/providers/google/cloud/example_dags/example_dataproc.py", "docs/apache-airflow-providers-google/operators/cloud/dataproc.rst"]
Error in passing metadata to DataprocClusterCreateOperator
Hi, I am facing issues while installing pip packages in a Dataproc cluster using an initialization script. I am trying to upgrade to Airflow 2.0 from 1.10.12 (where this code works fine). `` [2021-07-09 11:35:37,587] {taskinstance.py:1454} ERROR - metadata was invalid: [('PIP_PACKAGES', 'pyyaml requests pandas openpyxl'), ('x-goog-api-client', 'gl-python/3.7.10 grpc/1.35.0 gax/1.26.0 gccl/airflow_v2.0.0+astro.3') `` ```python path = f"gs://goog-dataproc-initialization-actions-{self.cfg.get('region')}/python/pip-install.sh" return DataprocClusterCreateOperator( ........ init_actions_uris=[path], metadata=[('PIP_PACKAGES', 'pyyaml requests pandas openpyxl')], ............ ) ``` **Apache Airflow version**: airflow_v2.0.0 **What happened**: I am trying to migrate our codebase from Airflow v1.10.12. On deeper analysis I found that, as part of the refactoring in PR #6371, we can no longer pass **metadata** to DataprocClusterCreateOperator(), because it is not forwarded to the ClusterGenerator() method. **What you expected to happen**: The operator should work as before.
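A sketch of the direction the eventual docs change points at: build the cluster config with `ClusterGenerator` (which holds the old cluster-level kwargs such as `init_actions_uris` and `metadata`) and hand it to `DataprocCreateClusterOperator`. Treat the exact `metadata` type and the parameter set below as assumptions; the project, region, and cluster names are placeholders.

```python
from airflow.providers.google.cloud.operators.dataproc import (
    ClusterGenerator,
    DataprocCreateClusterOperator,
)

REGION = "europe-west1"
INIT_ACTION = f"gs://goog-dataproc-initialization-actions-{REGION}/python/pip-install.sh"

# Assumption: ClusterGenerator accepts the old cluster-level kwargs and
# exposes make() to produce the cluster_config dict.
cluster_config = ClusterGenerator(
    project_id="my-project",
    num_workers=2,
    init_actions_uris=[INIT_ACTION],
    metadata={"PIP_PACKAGES": "pyyaml requests pandas openpyxl"},
).make()

create_cluster = DataprocCreateClusterOperator(
    task_id="create_dataproc_cluster",
    project_id="my-project",
    region=REGION,
    cluster_name="pip-packages-cluster",
    cluster_config=cluster_config,
)
```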
https://github.com/apache/airflow/issues/16911
https://github.com/apache/airflow/pull/19446
0c9ce547594bad6451d9139676d0a5039d3ec182
48f228cf9ef7602df9bea6ce20d663ac0c4393e1
"2021-07-09T13:03:48Z"
python
"2021-11-15T21:39:26Z"
closed
apache/airflow
https://github.com/apache/airflow
16,907
["docs/apache-airflow/concepts/scheduler.rst"]
Add tests suite for MariaDB 10.6+ and fix incompatibilities
It seems that MariaDB 10.6 has added support for SKIP LOCKED, so we could, in theory, officially support the MariaDB database fairly easily (possibly after fixing some small issues surfaced by running the test suite). It would require adding a `mariadb` backend, similarly to how we added the `mssql` backend in these three PRs: #9973, #16103, #16134
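For background, the HA scheduler's critical section relies on `SELECT ... FOR UPDATE SKIP LOCKED` so that multiple schedulers do not block on each other's rows. A minimal SQLAlchemy sketch of that pattern (assuming SQLAlchemy 1.4+; the model, table, and DSN are placeholders, not Airflow's schema):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class QueuedTask(Base):
    __tablename__ = "queued_tasks"  # placeholder table
    id = Column(Integer, primary_key=True)
    state = Column(String(20))


engine = create_engine("mariadb+mysqldb://user:pass@localhost/airflow")  # placeholder DSN

with Session(engine) as session:
    # Each scheduler claims unclaimed rows without blocking on rows that
    # another scheduler has already locked.
    claimed = (
        session.query(QueuedTask)
        .filter(QueuedTask.state == "queued")
        .with_for_update(skip_locked=True)
        .limit(10)
        .all()
    )
```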
https://github.com/apache/airflow/issues/16907
https://github.com/apache/airflow/pull/17287
9cd5a97654fa82f1d4d8f599e8eb81957b3f7286
6c9eab3ea0697b82f11acf79656129604ec0e8f7
"2021-07-09T08:03:08Z"
python
"2021-07-28T16:56:04Z"
closed
apache/airflow
https://github.com/apache/airflow
16,891
["CONTRIBUTING.rst", "airflow/providers/amazon/aws/example_dags/example_salesforce_to_s3.py", "airflow/providers/amazon/aws/transfers/salesforce_to_s3.py", "airflow/providers/amazon/provider.yaml", "airflow/providers/dependencies.json", "docs/apache-airflow-providers-amazon/operators/salesforce_to_s3.rst", "tests/providers/amazon/aws/transfers/test_salesforce_to_s3.py"]
Add a SalesforceToS3Operator to push Salesforce data to an S3 bucket
**Description** Currently an operator exists to copy Salesforce data to Google Cloud Storage (`SalesforceToGcsOperator`) but a similar operator for an S3 destination is absent. Since S3 is widely used as part of general storage/data lakes within data pipelines as well as Salesforce to manage a slew of marketing, customer, and sales data, this operator seems beneficial. **Use case / motivation** Undoubtedly there are use cases to extract Salesforce into an S3 bucket, perhaps augmenting a data warehouse with said data downstream, or ensuring the data is sync'd to a data lake as part of a modern data architecture. I imagine users are currently building custom operators to do so in Airflow (if not taking advantage of an external service to handle the copy/sync). Ideally this functionality would be included within the AWS provider as well as help provide some feature parity with Google Cloud in Airflow. **Are you willing to submit a PR?** Yes 🚀 **Related Issues** None that I can find.
https://github.com/apache/airflow/issues/16891
https://github.com/apache/airflow/pull/17094
038b87ecfa4466a405bcf7243872ef927800b582
32582b5bf1432e7c7603b959a675cf7edd76c9e6
"2021-07-08T17:50:50Z"
python
"2021-07-21T16:31:14Z"
closed
apache/airflow
https://github.com/apache/airflow
16,887
["airflow/www/static/js/graph.js"]
Show duration of a task group
**Description** Show the duration of the task group in the Airflow UI. ![image](https://user-images.githubusercontent.com/706385/124945371-5e3aae00-e00e-11eb-8050-6fa988e09191.png) **Use case / motivation** Task groups are a nice way to encapsulate multiple tasks. However, in the Graph view, the duration of the grouped tasks isn't visible; you need to expand the group to view them. It's possible to view the task durations in the Task Duration view, but that isn't as convenient if you want to zoom in on a particular section of the pipeline. **Are you willing to submit a PR?** Yes, but I'm not familiar with the codebase. If it is a relatively easy fix, I would appreciate some guidance on which files to touch. **Related Issues** None
https://github.com/apache/airflow/issues/16887
https://github.com/apache/airflow/pull/18406
c1f34bdb9fefe1b0bc8ce2a69244c956724f4c48
deb01dd806fac67e71e706cd2c00a7a8681c512a
"2021-07-08T15:03:39Z"
python
"2021-09-23T11:56:57Z"
closed
apache/airflow
https://github.com/apache/airflow
16,881
["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"]
Re-deploy scheduler tasks failing with SIGTERM on K8s executor
**Apache Airflow version**: 2.1.0 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.18.17-gke.1901 **Environment**: - **Cloud provider or hardware configuration**: Google Cloud - **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster) - **Kernel** (e.g. `uname -a`): Linux airflow-scheduler-7697b66974-m6mct 5.4.89+ #1 SMP Sat Feb 13 19:45:14 PST 2021 x86_64 GNU/Linux - **Install tools**: - **Others**: **What happened**: When the `scheduler` is restarted, the currently running tasks hit a SIGTERM error. Every time the `scheduler` is restarted or re-deployed, the current `scheduler` is terminated and a new `scheduler` is created. If tasks are running during this process, the new `scheduler` terminates them with `complete` status and new tasks are created to continue the work of the terminated ones. After a few seconds the new tasks are terminated with `error` status and a SIGTERM error. <details><summary>Error log</summary> [2021-07-07 14:59:49,024] {cursor.py:661} INFO - query execution done [2021-07-07 14:59:49,025] {arrow_result.pyx:0} INFO - fetching data done [2021-07-07 15:00:07,361] {local_task_job.py:196} WARNING - State of this instance has been externally set to failed. Terminating instance. [2021-07-07 15:00:07,363] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 150 [2021-07-07 15:00:12,845] {taskinstance.py:1264} ERROR - Received SIGTERM. Terminating subprocesses. [2021-07-07 15:00:12,907] {process_utils.py:66} INFO - Process psutil.Process(pid=150, status='terminated', exitcode=0, started='14:59:46') (150) terminated with exit code 0 </details> **What you expected to happen**: The tasks currently running should be allowed to finish their work, or the substitute tasks should complete successfully. The new `scheduler` should not interfere with the running tasks. **How to reproduce it**: To reproduce, start a DAG with task(s) that take some minutes to complete. While the task(s) are running, re-deploy the `scheduler`. During the re-deploy, the current `scheduler` is terminated and a new one is created. The running task(s) are marked as completed (without finishing their work) and are substituted by new ones that fail within seconds. **Anything else we need to know**: The problem did not happen with Airflow 1.10.15; it started after the upgrade to Airflow 2.1.0.
https://github.com/apache/airflow/issues/16881
https://github.com/apache/airflow/pull/19375
e57c74263884ad5827a5bb9973eb698f0c269cc8
38d329bd112e8be891f077b4e3300182930cf74d
"2021-07-08T08:43:07Z"
python
"2021-11-03T06:45:41Z"
closed
apache/airflow
https://github.com/apache/airflow
16,880
["airflow/providers/amazon/aws/sensors/sqs.py", "setup.py", "tests/providers/amazon/aws/sensors/test_sqs.py"]
Improve AWS SQS Sensor
**Description** Improve the AWS SQS Sensor as follows: + Add optional visibility_timeout parameter + Add a customisable / overrideable filter capability so we can filter/ignore irrelevant messages [Not needed, see below conversation] --- Check the HTTP status code in AWS response and raise Exception if not 200 - best practice **Use case / motivation** I'd like to make the SQS sensor more flexible to enable the following use case: + A single queue can be used as a channel for messages from multiple event sources and or multiple targets + We need a way to filter and ignore messages not relevant to us, which other processes are looking for **Are you willing to submit a PR?** Yes, please assign to me **Related Issues** None
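A boto3 sketch of the requested behaviour, not the provider's implementation: pass an explicit `VisibilityTimeout`, check the HTTP status code in the response, and let the caller supply a filter so messages meant for other consumers sharing the queue are ignored. The queue URL and filter below are placeholders.

```python
import boto3


def poll_relevant_messages(queue_url: str, is_relevant, visibility_timeout: int = 30):
    sqs = boto3.client("sqs")
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        VisibilityTimeout=visibility_timeout,
        WaitTimeSeconds=5,
    )
    # Best practice mentioned in the issue: fail loudly on a non-200 response.
    if response["ResponseMetadata"]["HTTPStatusCode"] != 200:
        raise RuntimeError(f"SQS receive_message failed: {response['ResponseMetadata']}")
    # Ignore messages that other processes on the shared queue are waiting for.
    return [m for m in response.get("Messages", []) if is_relevant(m["Body"])]


messages = poll_relevant_messages(
    "https://sqs.eu-west-1.amazonaws.com/123456789012/shared-events",
    is_relevant=lambda body: '"source": "billing"' in body,
)
```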
https://github.com/apache/airflow/issues/16880
https://github.com/apache/airflow/pull/16904
2c1880a90712aa79dd7c16c78a93b343cd312268
d28efbfb7780afd1ff13a258dc5dc3e3381ddabd
"2021-07-08T08:11:56Z"
python
"2021-08-02T20:47:10Z"
closed
apache/airflow
https://github.com/apache/airflow
16,877
["airflow/www/static/js/tree.js"]
Cleared task instances in manual runs should have borders
**Description** Task instances in manual runs do not display with a border, except when the task instance is non-existent (after adding tasks to an existing DAG). Hence, when an _existing_ task instance is cleared, it is displayed without a border, causing it to disappear into the background. To be consistent, existing task instances that are cleared should also be drawn with borders. Here, `task2a` and `task2b` are newly-added tasks and have `no_status`. They are displayed with borders: ![image](https://user-images.githubusercontent.com/40527812/124863865-ba4cf600-df6c-11eb-932f-f2515ecb3914.png) Afterwards, `task1a` and `task1b` are cleared and lose their borders: ![image](https://user-images.githubusercontent.com/40527812/124863991-fb450a80-df6c-11eb-8a5c-f9a0ca6b478e.png) **Use case / motivation** To prevent the task instances from disappearing into the background. **Are you willing to submit a PR?** Yes, but would need ramp-up time as I am new to front-end. **Related Issues** Split from #16824.
https://github.com/apache/airflow/issues/16877
https://github.com/apache/airflow/pull/18033
a8184e42ce9d9b7f6b409f07c1e2da0138352ef3
d856b79a1ddab030ab3e873ae2245738b949c30a
"2021-07-08T05:01:33Z"
python
"2021-09-21T13:32:53Z"
closed
apache/airflow
https://github.com/apache/airflow
16,844
["airflow/api_connexion/openapi/v1.yaml"]
Rest API: allow filtering DagRuns by state.
**Add state filter to the dag runs list API endpoint** One feature available in the "Browse / Dag Runs" page but not in the current rest API is the ability to filter runs of a specific state(s). Example use-case: this would let a client efficiently fetch the number of "queued" and "running" runs, or look at recent failed runs. Ideally, the current `/dags/{dag_id}/dagRuns` and `/dags/~/dagRuns/list` endpoints would each get updated to support an additional parameter called `state`. This parameter could be given multiple times and act as a logical "OR" (just like `dag_ids` in the POST endpoint, or like `state` in the task instances endpoint). The web UI page offers more fancy filters like "includes", but for something with a finite number of values like `state`, it doesn't feel necessary for the API.
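A client-side sketch of the requested filter; the repeated `state` query parameter is the feature being asked for here (it does not exist in the API yet), and the host and credentials are placeholders.

```python
import requests

resp = requests.get(
    "http://localhost:8080/api/v1/dags/example_dag/dagRuns",
    params=[("state", "queued"), ("state", "running")],  # repeated param acts as a logical OR
    auth=("admin", "admin"),  # placeholder basic-auth credentials
)
resp.raise_for_status()
print(resp.json()["total_entries"])
```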
https://github.com/apache/airflow/issues/16844
https://github.com/apache/airflow/pull/20697
4fa9cfd7de13cd79956fbb68f8416a5a019465a4
376da6a969a3bb13a06382a38ab467a92fee0179
"2021-07-07T00:03:32Z"
python
"2022-01-06T10:10:59Z"
closed
apache/airflow
https://github.com/apache/airflow
16,834
["airflow/utils/log/file_task_handler.py"]
Airflow dashboard cannot load logs containing emoji
**Apache Airflow version**: 2.1.0 **What happened**: When printing emoji to a DAG log, the Airflow dashboard fails to display the entire log. When checking the output log in the Airflow dashboard, the following error message appears: > *** Failed to load local log file: /tmp/dag_name/task_name/2021-07-06T10:49:18.136953+00:00/1.log > *** 'ascii' codec can't decode byte 0xf0 in position 3424: ordinal not in range(128) **What you expected to happen**: The log should be displayed. <!-- What do you think went wrong? --> **How to reproduce it**: Insert the following into any Python DAG, then run it. `print("💼")` **How often does this problem occur?** Every log with an emoji in it prints an error. **Why would anyone even want to print emoji in their logs?** When they're part of the dataset you're processing.
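A minimal reproduction of the decode failure outside Airflow, showing the direction a fix would take (read the log as UTF-8, or at least with a tolerant error handler) rather than the handler's actual code:

```python
log_path = "/tmp/emoji_demo.log"

with open(log_path, "w", encoding="utf-8") as f:
    f.write("processing briefcase record 💼\n")

try:
    with open(log_path, encoding="ascii") as f:  # mimics an ASCII default locale
        f.read()
except UnicodeDecodeError as err:
    print(f"same class of failure as the dashboard error: {err}")

with open(log_path, encoding="utf-8", errors="replace") as f:
    print(f.read())  # displays the log, emoji included
```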
https://github.com/apache/airflow/issues/16834
https://github.com/apache/airflow/pull/17965
02397761af7ed77b0e7c4f4d8de34d8a861c5b40
2f1ed34a7ec699bd027004d1ada847ed15f4aa4b
"2021-07-06T12:39:54Z"
python
"2021-09-12T17:57:14Z"
closed
apache/airflow
https://github.com/apache/airflow
16,833
["docs/helm-chart/customizing-workers.rst", "docs/helm-chart/index.rst"]
Chart: Add docs on using custom pod-template
Currently, we allow users to use their own `podTemplate` yaml using the `podTemplate` key in `values.yaml`. Some users have passed the name of the file in `podTemplate` instead of YAML string. We should have a dedicated page on how a user could do that and add an example in `values.yaml` file itself. https://airflow.apache.org/docs/helm-chart/stable/parameters-ref.html https://github.com/apache/airflow/blob/81fde5844de37e90917deaaff9576914cb2637ee/chart/values.yaml#L1123-L1125 https://github.com/apache/airflow/blob/81fde5844de37e90917deaaff9576914cb2637ee/chart/templates/configmaps/configmap.yaml#L59-L65
https://github.com/apache/airflow/issues/16833
https://github.com/apache/airflow/pull/20331
e148bf6b99b9b62415a7dd9fbfa594e0f5759390
8192a801f3090c4da19427819d551405c58d37e5
"2021-07-06T12:37:35Z"
python
"2021-12-16T17:19:05Z"
closed
apache/airflow
https://github.com/apache/airflow
16,806
["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"]
Error mounting /tmp/airflowtmp... with remote docker
**Apache Airflow version**: v2.1.0 **Environment**: - **Cloud provider or hardware configuration**: ec2 t3a.medium - **OS** (e.g. from /etc/os-release): Ubuntu 18.04.5 LTS - **Kernel** (e.g. `uname -a`): 5.4.0-1051-aws - **Install tools**: sudo pip3 install apache-airflow[mysql,ssh,docker,amazon] - **Others**: python 3.6.9 **What happened**: Task fails with error: ```none docker.errors.APIError: 400 Client Error for http://192.168.1.50:2375/v1.41/containers/create: Bad Request ("invalid mount config for type "bind": bind source path does not exist: /tmp/airflowtmp7naq_r53") ``` **How to reproduce it**: Create an separate EC2 instance and forward the docker daemon: ```shell sudo mkdir -p /etc/systemd/system/docker.service.d sudo touch /etc/systemd/system/docker.service.d/options.conf echo -e """ [Service] ExecStart= ExecStart=/usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:2375 """ >> /etc/systemd/system/docker.service.d/options.conf sudo systemctl daemon-reload sudo systemctl restart docker ``` Create dag with DockerOperator ```python DockerOperator( task_id="run_image", docker_url="tcp://192.168.1.50:2375", image="ubuntu:latest", dag=dag, ) ``` Run the DAG. **Anything else we need to know**: To me it looks like the DockerOperator is creating a temporary directory locally and tries to bind it to the container. However as this is a remote container the directory doesn't exist. here is the code part: ```python class DockerOperator(BaseOperator): ... def _run_image(self) -> Optional[str]: """Run a Docker container with the provided image""" self.log.info('Starting docker container from image %s', self.image) with TemporaryDirectory(prefix='airflowtmp', dir=self.host_tmp_dir) as host_tmp_dir: if not self.cli: raise Exception("The 'cli' should be initialized before!") tmp_mount = Mount(self.tmp_dir, host_tmp_dir, "bind") self.container = self.cli.create_container( command=self.format_command(self.command), name=self.container_name, environment={**self.environment, **self._private_environment}, host_config=self.cli.create_host_config( auto_remove=False, mounts=self.mounts + [tmp_mount], network_mode=self.network_mode, shm_size=self.shm_size, dns=self.dns, dns_search=self.dns_search, cpu_shares=int(round(self.cpus * 1024)), mem_limit=self.mem_limit, cap_add=self.cap_add, extra_hosts=self.extra_hosts, privileged=self.privileged, ), image=self.image, user=self.user, entrypoint=self.format_command(self.entrypoint), working_dir=self.working_dir, tty=self.tty, ) ``` I see no way of disabling this behavior without some major patching. How are you guys using remote docker daemons? Is this a use case? Would it be possible to implement something to allow that?
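As a hedged aside: newer releases of the Docker provider grew an option to skip the temporary-directory bind mount, which is exactly the piece that breaks against a remote daemon. A sketch assuming such a flag is available (verify the parameter name against your provider version):

```python
from airflow.providers.docker.operators.docker import DockerOperator

# With the temporary-directory mount disabled, nothing local needs to exist
# on the remote Docker host. `mount_tmp_dir` is assumed to be available in
# the installed provider version.
run_image = DockerOperator(
    task_id="run_image",
    docker_url="tcp://192.168.1.50:2375",
    image="ubuntu:latest",
    mount_tmp_dir=False,
)
```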
https://github.com/apache/airflow/issues/16806
https://github.com/apache/airflow/pull/16932
fc0250f1d5c43784f353dbdf4a34089aa96c28e5
bc004151ed6924ee7bec5d9d047aedb4873806da
"2021-07-05T08:35:47Z"
python
"2021-07-15T04:35:25Z"
closed
apache/airflow
https://github.com/apache/airflow
16,783
["airflow/www/auth.py", "airflow/www/templates/airflow/no_roles.html", "airflow/www/views.py", "tests/www/views/test_views_acl.py"]
Airflow 2.1.0 Oauth for google Too Many Redirects b/c Google User does not have Role
The issue is similar to this ticket [16587](https://github.com/apache/airflow/issues/16587) and [14829](https://github.com/apache/airflow/issues/14829) however I have an updated airflow version AND updated packages than the ones suggested here and I am still getting the same outcome. When using google auth in airflow and attempting to sign in, we get an ERR_TOO_MANY_REDIRECTS. I know what causes the symptom of this, but hoping to find a resolution of keeping a Role in place to avoid the REDIRECTS. - **Apache Airflow version**: Version: v2.1.0 Git Version: .release:2.1.0+304e174674ff6921cb7ed79c0158949b50eff8fe - **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.10-gke.1600", GitCommit:"7b8e568a7fb4c9d199c2ba29a5f7d76f6b4341c2", GitTreeState:"clean", BuildDate:"2021-05-07T09:18:53Z", GoVersion:"go1.15.10b5", Compiler:"gc", Platform:"linux/amd64"} - **Environment**: Staging - **Cloud provider or hardware configuration**: GKE on - **OS** (e.g. from /etc/os-release): PRETTY_NAME="Debian GNU/Linux 10 (buster)" NAME="Debian GNU/Linux" VERSION_ID="10" VERSION="10 (buster)" VERSION_CODENAME=buster ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/" - **Kernel** (e.g. `uname -a`): Linux margins-scheduler-97b6fb867-fth8p 5.4.89+ #1 SMP Sat Feb 13 19:45:14 PST 2021 x86_64 GNU/Linux - **Install tools**: pip freeze below **What happened**: When using google auth in airflow and attempting to sign in, we get an ERR_TOO_MANY_REDIRECTS. **What you expected to happen**: I expect to log in as my user and it assigns a default Role of Viewer at the very least OR uses our mappings in web_server config python file. But the Role is blank in Database. <!-- What do you think went wrong? --> We realized that we get stuck in the loop, b/c the user will be in the users table in airflow but without a Role (its literally empty). Therefore it goes from the /login to /home to /login to /home over and over again. **How to reproduce it**: I add the Admin role in the database for my user, and the page that has the redirects refreshes and lets me in to the Airflow UI. However, when I sign out and signin in again, my users Role is then erased and it starts the redirect cycle again. As you can see there is no Role (this happens when I attempt to login) ``` id | username | email | first_name | last_name | roles ===+==============================+=========================+============+===========+====== 1 | admin | [email protected] | admin | admin | Admin 2 | google_############ | [email protected] | Cat | Says | ``` I run the command: `airflow users add-role -r Admin -u google_#################` Then the page takes me to the UI and the table now looks like this: ``` id | username | email | first_name | last_name | roles ===+==============================+=========================+============+===========+====== 1 | admin | [email protected] | admin | admin | Admin 2 | google_############ | [email protected] | Cat | Says | Admin ``` How often does this problem occur? Once? Every time etc? 
This occurs all the time Here is the webserver_config.py ``` import os from flask_appbuilder.security.manager import AUTH_OAUTH AUTH_TYPE = AUTH_OAUTH AUTH_ROLE_ADMIN="Admin" AUTH_USER_REGISTRATION = False AUTH_USER_REGISTRATION_ROLE = "Admin" OIDC_COOKIE_SECURE = False CSRF_ENABLED = False WTF_CSRF_ENABLED = True AUTH_ROLES_MAPPING = {"Engineering": ["Ops"],"Admins": ["Admin"]} AUTH_ROLES_SYNC_AT_LOGIN = True OAUTH_PROVIDERS = [ { 'name': 'google', 'icon': 'fa-google', 'token_key': 'access_token', 'remote_app': { 'client_id': '#####################.apps.googleusercontent.com', 'client_secret': '######################', 'api_base_url': 'https://www.googleapis.com/oauth2/v2/', 'whitelist': ['@company.com'], # optional 'client_kwargs': { 'scope': 'email profile' }, 'request_token_url': None, 'access_token_url': 'https://accounts.google.com/o/oauth2/token', 'authorize_url': 'https://accounts.google.com/o/oauth2/auth'}, } ] ``` Here is the pip freeze: ``` adal==1.2.7 alembic==1.6.2 amqp==2.6.1 anyio==3.2.1 apache-airflow==2.1.0 apache-airflow-providers-amazon==1.4.0 apache-airflow-providers-celery==1.0.1 apache-airflow-providers-cncf-kubernetes==1.2.0 apache-airflow-providers-docker==1.2.0 apache-airflow-providers-elasticsearch==1.0.4 apache-airflow-providers-ftp==1.1.0 apache-airflow-providers-google==3.0.0 apache-airflow-providers-grpc==1.1.0 apache-airflow-providers-hashicorp==1.0.2 apache-airflow-providers-http==1.1.1 apache-airflow-providers-imap==1.0.1 apache-airflow-providers-microsoft-azure==2.0.0 apache-airflow-providers-mysql==1.1.0 apache-airflow-providers-postgres==1.0.2 apache-airflow-providers-redis==1.0.1 apache-airflow-providers-sendgrid==1.0.2 apache-airflow-providers-sftp==1.2.0 apache-airflow-providers-slack==3.0.0 apache-airflow-providers-sqlite==1.0.2 apache-airflow-providers-ssh==1.3.0 apispec==3.3.2 appdirs==1.4.4 argcomplete==1.12.3 async-generator==1.10 attrs==20.3.0 azure-batch==10.0.0 azure-common==1.1.27 azure-core==1.13.0 azure-cosmos==3.2.0 azure-datalake-store==0.0.52 azure-identity==1.5.0 azure-keyvault==4.1.0 azure-keyvault-certificates==4.2.1 azure-keyvault-keys==4.3.1 azure-keyvault-secrets==4.2.0 azure-kusto-data==0.0.45 azure-mgmt-containerinstance==1.5.0 azure-mgmt-core==1.2.2 azure-mgmt-datafactory==1.1.0 azure-mgmt-datalake-nspkg==3.0.1 azure-mgmt-datalake-store==0.5.0 azure-mgmt-nspkg==3.0.2 azure-mgmt-resource==16.1.0 azure-nspkg==3.0.2 azure-storage-blob==12.8.1 azure-storage-common==2.1.0 azure-storage-file==2.1.0 Babel==2.9.1 bcrypt==3.2.0 billiard==3.6.4.0 blinker==1.4 boto3==1.17.71 botocore==1.20.71 cached-property==1.5.2 cachetools==4.2.2 cattrs==1.0.0 celery==4.4.7 certifi==2020.12.5 cffi==1.14.5 chardet==3.0.4 click==7.1.2 clickclick==20.10.2 cloudpickle==1.4.1 colorama==0.4.4 colorlog==5.0.1 commonmark==0.9.1 contextvars==2.4 croniter==1.0.13 cryptography==3.4.7 dask==2021.3.0 dataclasses==0.7 defusedxml==0.7.1 dill==0.3.1.1 distlib==0.3.1 distributed==2.19.0 dnspython==1.16.0 docker==3.7.3 docker-pycreds==0.4.0 docutils==0.17.1 elasticsearch==7.5.1 elasticsearch-dbapi==0.1.0 elasticsearch-dsl==7.3.0 email-validator==1.1.2 eventlet==0.31.0 filelock==3.0.12 Flask==1.1.2 Flask-AppBuilder==3.3.0 Flask-Babel==1.0.0 Flask-Caching==1.10.1 Flask-JWT-Extended==3.25.1 Flask-Login==0.4.1 Flask-OpenID==1.2.5 Flask-SQLAlchemy==2.5.1 Flask-WTF==0.14.3 flower==0.9.7 gevent==21.1.2 google-ads==4.0.0 google-api-core==1.26.3 google-api-python-client==1.12.8 google-auth==1.30.0 google-auth-httplib2==0.1.0 google-auth-oauthlib==0.4.4 google-cloud-automl==2.3.0 
google-cloud-bigquery==2.16.0 google-cloud-bigquery-datatransfer==3.1.1 google-cloud-bigquery-storage==2.4.0 google-cloud-bigtable==1.7.0 google-cloud-container==1.0.1 google-cloud-core==1.6.0 google-cloud-datacatalog==3.1.1 google-cloud-dataproc==2.3.1 google-cloud-dlp==1.0.0 google-cloud-kms==2.2.0 google-cloud-language==1.3.0 google-cloud-logging==2.3.1 google-cloud-memcache==0.3.0 google-cloud-monitoring==2.2.1 google-cloud-os-login==2.1.0 google-cloud-pubsub==2.4.2 google-cloud-redis==2.1.0 google-cloud-secret-manager==1.0.0 google-cloud-spanner==1.19.1 google-cloud-speech==1.3.2 google-cloud-storage==1.38.0 google-cloud-tasks==2.2.0 google-cloud-texttospeech==1.0.1 google-cloud-translate==1.7.0 google-cloud-videointelligence==1.16.1 google-cloud-vision==1.0.0 google-cloud-workflows==0.3.0 google-crc32c==1.1.2 google-resumable-media==1.2.0 googleapis-common-protos==1.53.0 graphviz==0.16 greenlet==1.1.0 grpc-google-iam-v1==0.12.3 grpcio==1.37.1 grpcio-gcp==0.2.2 gunicorn==20.1.0 h11==0.12.0 HeapDict==1.0.1 httpcore==0.13.6 httplib2==0.17.4 httpx==0.18.2 humanize==3.5.0 hvac==0.10.11 idna==2.10 immutables==0.15 importlib-metadata==1.7.0 importlib-resources==1.5.0 inflection==0.5.1 iso8601==0.1.14 isodate==0.6.0 itsdangerous==1.1.0 Jinja2==2.11.3 jmespath==0.10.0 json-merge-patch==0.2 jsonschema==3.2.0 kombu==4.6.11 kubernetes==11.0.0 lazy-object-proxy==1.4.3 ldap3==2.9 libcst==0.3.18 lockfile==0.12.2 Mako==1.1.4 Markdown==3.3.4 MarkupSafe==1.1.1 marshmallow==3.12.1 marshmallow-enum==1.5.1 marshmallow-oneofschema==2.1.0 marshmallow-sqlalchemy==0.23.1 msal==1.11.0 msal-extensions==0.3.0 msgpack==1.0.2 msrest==0.6.21 msrestazure==0.6.4 mypy-extensions==0.4.3 mysql-connector-python==8.0.22 mysqlclient==2.0.3 numpy==1.19.5 oauthlib==2.1.0 openapi-schema-validator==0.1.5 openapi-spec-validator==0.3.0 packaging==20.9 pandas==1.1.5 pandas-gbq==0.14.1 paramiko==2.7.2 pendulum==2.1.2 pep562==1.0 plyvel==1.3.0 portalocker==1.7.1 prison==0.1.3 prometheus-client==0.8.0 proto-plus==1.18.1 protobuf==3.16.0 psutil==5.8.0 psycopg2-binary==2.8.6 pyarrow==3.0.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycparser==2.20 pydata-google-auth==1.2.0 Pygments==2.9.0 PyJWT==1.7.1 PyNaCl==1.4.0 pyOpenSSL==19.1.0 pyparsing==2.4.7 pyrsistent==0.17.3 pysftp==0.2.9 python-daemon==2.3.0 python-dateutil==2.8.1 python-editor==1.0.4 python-http-client==3.3.2 python-ldap==3.3.1 python-nvd3==0.15.0 python-slugify==4.0.1 python3-openid==3.2.0 pytz==2021.1 pytzdata==2020.1 PyYAML==5.4.1 redis==3.5.3 requests==2.25.1 requests-oauthlib==1.1.0 rfc3986==1.5.0 rich==9.2.0 rsa==4.7.2 s3transfer==0.4.2 sendgrid==6.7.0 setproctitle==1.2.2 six==1.16.0 slack-sdk==3.5.1 sniffio==1.2.0 sortedcontainers==2.3.0 SQLAlchemy==1.3.24 SQLAlchemy-JSONField==1.0.0 SQLAlchemy-Utils==0.37.2 sshtunnel==0.1.5 starkbank-ecdsa==1.1.0 statsd==3.3.0 swagger-ui-bundle==0.0.8 tabulate==0.8.9 tblib==1.7.0 tenacity==6.2.0 termcolor==1.1.0 text-unidecode==1.3 toolz==0.11.1 tornado==6.1 typing==3.7.4.3 typing-extensions==3.7.4.3 typing-inspect==0.6.0 unicodecsv==0.14.1 uritemplate==3.0.1 urllib3==1.25.11 vine==1.3.0 virtualenv==20.4.6 watchtower==0.7.3 websocket-client==0.59.0 Werkzeug==1.0.1 WTForms==2.3.3 zict==2.0.0 zipp==3.4.1 zope.event==4.5.0 zope.interface==5.4.0 ``` Thanks in advance.
https://github.com/apache/airflow/issues/16783
https://github.com/apache/airflow/pull/17613
d8c0cfea5ff679dc2de55220f8fc500fadef1093
6868ca48b29915aae8c131d694ea851cff1717de
"2021-07-02T21:26:19Z"
python
"2021-08-18T11:56:09Z"
closed
apache/airflow
https://github.com/apache/airflow
16,770
["airflow/providers/amazon/aws/hooks/base_aws.py", "tests/providers/amazon/aws/hooks/test_base_aws.py"]
AWS hook should automatically refresh credentials when using temporary credentials
**Apache Airflow version**: 1.10.8 (Patched with latest AWS Hook) **Environment**: - **Cloud provider or hardware configuration**: 4 VCPU 8GB RAM VM - **OS** (e.g. from /etc/os-release): RHEL 7.7 - **Kernel** (e.g. `uname -a`): Linux 3.10.0-957.el7.x86_64 - **Install tools**: - **Others**: The AWS Hook functionality for AssumeRoleWithSAML is not available in this version, we manually added it via patching the hook file. **What happened**: We've been using this hook for a while now with this issue, basically sts.assume_role and sts.assume_role_with_saml will return temporary credentials that are only valid for eg 1 hour by default. Eventually with long running operators / hooks / sensors some of them fail because the credentials have expired. Example error messages An error occurred (ExpiredTokenException) when calling the AssumeRole operation: Response has expired An error occurred (ExpiredTokenException) when calling the AssumeRoleWithSAML operation: Response has expired botocore.exceptions.ClientError: An error occurred (ExpiredTokenException) when calling the <any operation here> operation: The security token included in the request is expired **What you expected to happen**: AWS hook should be updated to use boto3 RefreshableCredentials when temporary credentials are in use. **How to reproduce it**: Use any of the assume role methods with the AWS Hook, create a session, wait 1 hour (or whatever expiry period applies to your role), and try and use the hook again. **Anything else we need to know**: I have a solution, please self-assign this.
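A minimal sketch of the refreshable-credentials approach using plain botocore/boto3, assuming a made-up role ARN; it illustrates the idea, not the hook's actual implementation.

```python
import boto3
from botocore.credentials import RefreshableCredentials
from botocore.session import get_session

ROLE_ARN = "arn:aws:iam::123456789012:role/example-role"  # placeholder role

def _refresh():
    # Called by botocore whenever the temporary credentials are close to expiry.
    creds = boto3.client("sts").assume_role(
        RoleArn=ROLE_ARN, RoleSessionName="airflow"
    )["Credentials"]
    return {
        "access_key": creds["AccessKeyId"],
        "secret_key": creds["SecretAccessKey"],
        "token": creds["SessionToken"],
        "expiry_time": creds["Expiration"].isoformat(),
    }

refreshable = RefreshableCredentials.create_from_metadata(
    metadata=_refresh(),
    refresh_using=_refresh,
    method="sts-assume-role",
)
botocore_session = get_session()
botocore_session._credentials = refreshable  # private attribute; common workaround
session = boto3.Session(botocore_session=botocore_session)
# Clients built from `session` now refresh automatically instead of expiring.
```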
https://github.com/apache/airflow/issues/16770
https://github.com/apache/airflow/pull/16771
44210237cc59d463cd13983dd6d1593e3bcb8b87
f0df184e4db940f7e1b9248b5f5843d494034112
"2021-07-02T08:50:30Z"
python
"2021-07-06T22:10:06Z"
closed
apache/airflow
https://github.com/apache/airflow
16,753
["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"]
Realtime ECS logging
**Description** Currently when `ECSOperator` is run, the logs of the ECS task are fetched only when the task is done. That is not very convenient, especially when the task takes a good amount of time. In order to understand what is happening with the task, I need to go to CloudWatch and search for the task's logs. It would be good to have some parallel process that could fetch ECS task logs from CloudWatch and make them visible in real time. **Are you willing to submit a PR?** I can try, but I need to be guided.
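A rough sketch of the kind of polling loop this would need, using boto3's CloudWatch Logs client directly; the log group/stream names and region are placeholders, and the actual operator integration would look different.

```python
import time

import boto3

logs = boto3.client("logs", region_name="us-east-1")  # assumed region

def tail_cloudwatch(log_group: str, log_stream: str, poll_seconds: int = 10, is_done=lambda: False):
    """Print new CloudWatch log events roughly in real time until is_done() is True."""
    kwargs = {"logGroupName": log_group, "logStreamName": log_stream, "startFromHead": True}
    while True:
        resp = logs.get_log_events(**kwargs)
        for event in resp["events"]:
            print(event["message"])
        # Reusing the forward token only returns events newer than the last batch.
        kwargs["nextToken"] = resp["nextForwardToken"]
        if is_done():
            return
        time.sleep(poll_seconds)

# tail_cloudwatch("/ecs/my-task", "ecs/my-container/1234567890abcdef")
```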
https://github.com/apache/airflow/issues/16753
https://github.com/apache/airflow/pull/17626
27088c4533199a19e6f810abc4e565bc8e107cf0
4cd190c9bcbe4229de3c8527d0e3480dea3be42f
"2021-07-01T10:49:05Z"
python
"2021-09-18T18:25:37Z"
closed
apache/airflow
https://github.com/apache/airflow
16,736
["BREEZE.rst", "breeze-complete", "scripts/ci/libraries/_initialization.sh", "scripts/ci/libraries/_kind.sh"]
The Helm Chart tests often timeout at installation recently in CI
Example here: https://github.com/apache/airflow/runs/2954825449#step:8:1950
https://github.com/apache/airflow/issues/16736
https://github.com/apache/airflow/pull/16750
fa811057a6ae0fc6c5e4bff1e18971c262a42a4c
e40c5a268d8dc24d1e6b00744308ef705224cb66
"2021-06-30T18:10:18Z"
python
"2021-07-01T12:29:52Z"
closed
apache/airflow
https://github.com/apache/airflow
16,730
["airflow/providers/amazon/aws/example_dags/example_s3_to_sftp.py", "airflow/providers/amazon/aws/example_dags/example_sftp_to_s3.py", "airflow/providers/amazon/aws/transfers/s3_to_sftp.py", "airflow/providers/amazon/aws/transfers/sftp_to_s3.py", "airflow/providers/amazon/provider.yaml", "docs/apache-airflow-providers-amazon/operators/transfer/s3_to_sftp.rst", "docs/apache-airflow-providers-amazon/operators/transfer/sftp_to_s3.rst"]
SFTPToS3Operator is not mentioned in the apache-airflow-providers-amazon > operators documentation
**Apache Airflow version**: 2.2.0, apache-airflow-providers-amazon == 2.0.0 **What you expected to happen**: I would expect to find the documentation for `airflow.providers.amazon.aws.transfers.sftp_to_s3.SFTPToS3Operator` in one of these locations: https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/operators/index.html or https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/operators/transfer/index.html
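For reference, a sketch of how the operator is typically wired up in a DAG; connection ids, bucket, and paths are placeholders, and the parameter names reflect my reading of the provider's operator, so double-check them against the released signature.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.transfers.sftp_to_s3 import SFTPToS3Operator

with DAG("example_sftp_to_s3", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    upload = SFTPToS3Operator(
        task_id="sftp_to_s3",
        sftp_conn_id="sftp_default",        # placeholder connection id
        sftp_path="/remote/path/data.csv",
        s3_conn_id="aws_default",
        s3_bucket="my-bucket",
        s3_key="incoming/data.csv",
    )
```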
https://github.com/apache/airflow/issues/16730
https://github.com/apache/airflow/pull/16964
d1e9d8c88441dce5e2f64a9c7594368d662a8d95
cda78333b4ce9304abe315ab1afe41efe17fd2da
"2021-06-30T08:20:24Z"
python
"2021-07-18T17:21:10Z"
closed
apache/airflow
https://github.com/apache/airflow
16,725
["airflow/sensors/filesystem.py", "tests/sensors/test_filesystem.py"]
filesensor wildcard matching does not recognize directories
**Apache Airflow version**: 2.1.0 **What happened**: FileSensor does not recognize directories when using wildcard glob matching. **What you expected to happen**: FileSensor should sense a directory that contains files if the directory matches the wildcard pattern. **How to reproduce it**: Create a directory whose name matches a wildcard glob pattern. **Anything else we need to know**: Code from the FileSensor source that I believe causes the issue:
```
for path in glob(full_path):
    if os.path.isfile(path):
        mod_time = os.path.getmtime(path)
        mod_time = datetime.datetime.fromtimestamp(mod_time).strftime('%Y%m%d%H%M%S')
        self.log.info('Found File %s last modified: %s', str(path), str(mod_time))
        return True

    for _, _, files in os.walk(full_path):
        if len(files) > 0:
            return True
return False
```
I believe that to resolve the issue, `full_path` in `os.walk` should be `path` instead.
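A standalone sketch of the corrected check the report is proposing (walking each glob match instead of the unexpanded pattern); this is illustrative, not the sensor's actual code.

```python
import datetime
import os
from glob import glob

def path_matches(full_path: str) -> bool:
    """True if the pattern matches a file, or a directory that contains files."""
    for path in glob(full_path):
        if os.path.isfile(path):
            mod_time = datetime.datetime.fromtimestamp(os.path.getmtime(path))
            print(f"Found file {path} last modified: {mod_time:%Y%m%d%H%M%S}")
            return True
        # Walk the matched directory itself, not the raw wildcard pattern.
        for _, _, files in os.walk(path):
            if files:
                return True
    return False
```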
https://github.com/apache/airflow/issues/16725
https://github.com/apache/airflow/pull/16894
83cb237031dfe5b7cb5238cc1409ce71fd9507b7
789e0eaee8fa9dc35b27c49cc50a62ea4f635978
"2021-06-30T01:55:11Z"
python
"2021-07-12T21:23:36Z"
closed
apache/airflow
https://github.com/apache/airflow
16,705
["airflow/sensors/external_task.py", "tests/sensors/test_external_task_sensor.py"]
Ability to add multiple task_ids in the ExternalTaskSensor
**Description** In its current shape the ExternalTaskSensor accepts either a single task_id or None to poll for the completion of a dag run. We have a use case where a dependent dag should poll for only a certain list of tasks in the upstream dag. One option is to add N ExternalTaskSensor nodes if there are N tasks to depend on, but that means too many sensor nodes in the dag, which could be avoided if the ExternalTaskSensor accepted a list of task_ids to poll for. **Use case / motivation** We have an upstream dag that updates a list of hive tables that are further used by a lot of downstream dags. This dag updates hundreds of hive tables, but some of the downstream dags depend only upon 10-20 of these tables, and there are multiple dags which depend upon varying lists of hive tables from the upstream dag. **Are you willing to submit a PR?** Yes, we are willing to submit a PR for this. **Related Issues** Not that I am aware of. I did a search on the issue list.
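A sketch of what the requested API could look like from a DAG author's point of view. The `external_task_ids` list parameter is the proposal, so this would not run against the sensor as it existed at the time of the report; names and schedule are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.sensors.external_task import ExternalTaskSensor

with DAG("downstream_reporting", start_date=datetime(2021, 1, 1), schedule_interval="0 6 * * *") as dag:
    # Wait only for the upstream tasks this DAG actually needs,
    # instead of one sensor per task or waiting on the whole upstream run.
    wait_for_tables = ExternalTaskSensor(
        task_id="wait_for_upstream_tables",
        external_dag_id="hive_tables_dag",
        external_task_ids=["update_table_a", "update_table_b"],  # proposed parameter
        poke_interval=300,
    )
```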
https://github.com/apache/airflow/issues/16705
https://github.com/apache/airflow/pull/17339
32475facce68a17d3e14d07762f63438e1527476
6040125faf9c6cbadce0a4f5113f1c5c3e584d66
"2021-06-29T13:11:12Z"
python
"2021-08-19T01:28:40Z"
closed
apache/airflow
https://github.com/apache/airflow
16,703
["airflow/www/package.json", "airflow/www/yarn.lock"]
Workers silently crash after memory build up
**Apache Airflow version**: 2.0.2 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.18.15 **Environment**: - **Cloud provider or hardware configuration**: AWS, ec2 servers deployed by kops - **OS** (e.g. from /etc/os-release): Ubuntu 20.04 - **Kernel** (e.g. `uname -a`): Linux 5.4.0-1024-aws # 24-Ubuntu - **Install tools**: Dockerfile - **Others**: Custom Dockerfile (not official airflow image from dockerhub) Celery Workers **What happened**: Memory usage builds up on our celery worker pods until they silently crash. Resource usage flat lines and no logs are created by the worker. The process is still running and Celery (verified via ping and flower) thinks the workers are up and running. No tasks are finished by Airflow, the schedulers are running fine and still logging appropriately but the workers are doing nothing. Workers do not accept any tasks and inflight jobs hang. They do not log an error message and the pod is not restarted as the process hasn't crashed. Our workers do not all crash at the same time, it happens over a couple of hours even if they were all restarted at the same time, so it seems to be related to how many jobs the worker has done/logs/other-non-time event. I believe this is related to the logs generated by the workers, Airflow appears to be reading in the existing log files to memory. Memory usage drops massively when the log files are deleted and then resume to build up again. There doesn't appear to be a definite upper limit of memory that the pod hits when it crashes, but its around the 8 or 10GB mark (there is 14 available to the pods but they dont hit that). Log size on disk correlates to more memory usage by a worker pod than one with smaller log size on disk. **What you expected to happen**: If the worker has crashed/ceased functioning it should either be able to log an appropriate message if the process is up or crash cleanly and be able to be restarted. Existing log files should not contribute to the memory usage of the airflow process either. Celery should also be able to detect that the worker is no longer functional. **How to reproduce it**: Run an airflow cluster with 40+ DAGs with several hundred tasks in total in an environment that has observable metrics, we use k8s with Prometheus. We have 5x worker pods. Monitor the memory usage of the worker containers/pods over time as well as the size of the airflow task logs. The trend should only increase. **Anything else we need to know**: This problem occurs constantly, after a clean deployment and in multiple environments. The official Airflow docker image contains a [log-cleaner](https://github.com/apache/airflow/blob/main/scripts/in_container/prod/clean-logs.sh) so its possible this has been avoided but in general 15 days default would be far too long. Our workers crash between 2 or 3 days. Resorting to an aggressive log cleaning script has mitigated the problem for us but without proper error logs or reasons for the crash it hard to be definite that we are safe. This is our airflow.cfg logging config, we aren't doing anything radical just storing in a bucket. ``` [logging] # Airflow can store logs remotely in AWS S3, Google Cloud Storage or Elastic Search. # Users must supply an Airflow connection id that provides access to the storage # location. If remote_logging is set to true, see UPDATING.md for additional # configuration requirements. 
# remote_logging = $ENABLE_REMOTE_LOGGING # remote_log_conn_id = s3conn # remote_base_log_folder = $LOGS_S3_BUCKET # encrypt_s3_logs = False remote_logging = True remote_log_conn_id = s3conn remote_base_log_folder = $AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER encrypt_s3_logs = False # Log format log_format = [%%(asctime)s] {%%(filename)s:%%(lineno)d} %%(levelname)s - %%(message)s simple_log_format = %%(asctime)s %%(levelname)s - %%(message)s # Logging level logging_level = INFO # Logging class # Specify the class that will specify the logging configuration # This class has to be on the python classpath logging_config_class = # The folder where airflow should store its log files # This path must be absolute base_log_folder = /usr/local/airflow/logs # Name of handler to read task instance logs. # Default to use file task handler. # task_log_reader = file.task task_log_reader = task ``` Here is a memory usage graph of a crashed worker pod, the flat line is when it is in a crashed state and then restarted. There is also a big cliff on the right of the graph at about 0900 on June 29th where I manually cleaned the log files from the disk. ![Crashed airflow worker](https://i.imgur.com/mO2ecQO.png) The last few log lines before it crashed: ``` Jun 25, 2021 @ 04:28:01.831 | [2021-06-25 03:28:01,830: INFO/MainProcess] Received task: airflow.executors.celery_executor.execute_command[5f802ffb-d5af-40ae-9e99-5e0501bf7d1c]  Jun 25, 2021 @ 04:27:36.769 | [2021-06-25 03:27:36,769: INFO/MainProcess] Received task: airflow.executors.celery_executor.execute_command[737d4310-c6ae-450f-889a-ffee53e94d33]   Jun 25, 2021 @ 04:27:25.565 | [2021-06-25 03:27:25,564: WARNING/ForkPoolWorker-13] Running <TaskInstance: a_task_name 2021-06-25T02:18:00+00:00 [queued]> on host airflow-worker-3.airflow-worker.airflow.svc.cluster.local   Jun 25, 2021 @ 04:27:25.403 | [2021-06-25 03:27:25,402: INFO/ForkPoolWorker-13] Filling up the DagBag from /usr/local/airflow/dags/a_dag.py   Jun 25, 2021 @ 04:27:25.337 | [2021-06-25 03:27:25,337: INFO/ForkPoolWorker-13] Executing command in Celery: ['airflow', 'tasks', 'run', 'task_name_redacted', 'task, '2021-06-25T02:18:00+00:00', '--local', '--pool', 'default_pool', '--subdir', '/usr/local/airflow/dags/a_dag.py']   Jun 25, 2021 @ 04:27:25.327 | [2021-06-25 03:27:25,326: INFO/ForkPoolWorker-13] Task airflow.executors.celery_executor.execute_command[4d9ee684-4ae3-41d2-8a00-e8071179a1b1] succeeded in 5.212706514168531s: None   Jun 25, 2021 @ 04:27:24.980 | [2021-06-25 03:27:24,979: INFO/ForkPoolWorker-13] role_arn is None   Jun 25, 2021 @ 04:27:24.968 | [2021-06-25 03:27:24,968: INFO/ForkPoolWorker-13] No credentials retrieved from Connection   Jun 25, 2021 @ 04:27:24.968 | [2021-06-25 03:27:24,968: INFO/ForkPoolWorker-13] Creating session with aws_access_key_id=None region_name=None   Jun 25, 2021 @ 04:27:24.954 | [2021-06-25 03:27:24,953: INFO/ForkPoolWorker-13] Airflow Connection: aws_conn_id=s3conn   Jun 25, 2021 @ 04:27:20.610 | [2021-06-25 03:27:20,610: WARNING/ForkPoolWorker-13] Running <TaskInstance: task_name_redacted 2021-06-25T03:10:00+00:00 [queued]> on host airflow-worker-3.airflow-worker.airflow.svc.cluster.local ```
https://github.com/apache/airflow/issues/16703
https://github.com/apache/airflow/pull/30112
869c1e3581fa163bbaad11a2d5ddaf8cf433296d
e09d00e6ab444ec323805386c2056c1f8a0ae6e7
"2021-06-29T09:48:11Z"
python
"2023-03-17T15:08:45Z"
closed
apache/airflow
https://github.com/apache/airflow
16,669
["airflow/providers/tableau/hooks/tableau.py", "airflow/providers/tableau/operators/tableau_refresh_workbook.py", "airflow/providers/tableau/sensors/tableau_job_status.py", "docs/apache-airflow-providers-tableau/connections/tableau.rst", "tests/providers/tableau/operators/test_tableau_refresh_workbook.py"]
TableauRefreshWorkbookOperator fails when using personal access token (Tableau authentication method)
**Apache Airflow version**: 2.0.1 **What happened**: The operator fails at the last step, after successfully refreshing the workbook with this error: ``` tableauserverclient.server.endpoint.exceptions.ServerResponseError: 401002: Unauthorized Access Invalid authentication credentials were provided. ``` **What you expected to happen**: It should not fail, like when we use the username/password authentication method (instead of personal_access_token) <!-- What do you think went wrong? --> Tableau server does not allow concurrent connections when using personal_access_token https://github.com/tableau/server-client-python/issues/717 The solution would be redesigning completely the operator to only call the hook once. My quick fix was to edit this in TableauHook: ``` def __exit__(self, exc_type: Any, exc_val: Any, exc_tb: Any) -> None: pass ``` **How to reproduce it**: Run this operator TableauRefreshWorkbookOperator using Tableau personal_access_token authentication (token_name, personal_access_token).
https://github.com/apache/airflow/issues/16669
https://github.com/apache/airflow/pull/16916
cc33d7e513e0f66a94a6e6277d6d30c08de94d64
53246ebef716933f71a28901e19367d84b0daa81
"2021-06-25T23:09:23Z"
python
"2021-07-15T06:29:31Z"
closed
apache/airflow
https://github.com/apache/airflow
16,646
["airflow/www/static/js/tree.js"]
Tree view - Skipped tasks showing Duration
**Apache Airflow version**: 2.1.0 **Environment**: - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): PRETTY_NAME="Debian GNU/Linux 10 (buster)" NAME="Debian GNU/Linux" - **Kernel** (e.g. `uname -a`): Linux airflow-scheduler-647d744f9-4zx2n 4.14.138-rancher #1 SMP Sat Aug 10 11:25:46 UTC 2019 x86_64 GNU/Linux - **Install tools**: - **Others**: **What happened**: When using BranchPythonOperator , skipped tasks show a `Duration` value (even if DAG is already completed). In comparison , same task shows no Duration at Graph View. Example: ![2021-06-24 20_53_35-skipped_showing_duration_](https://user-images.githubusercontent.com/10963531/123357208-c24d7480-d52e-11eb-9a7c-d741f477c4d8.png) ![2021-06-24 20_53_35-skipped_graph_view](https://user-images.githubusercontent.com/10963531/123357562-59b2c780-d52f-11eb-9b43-759160722fac.png) Actually Duration time keeps increasing after checking again same task instance: ![image](https://user-images.githubusercontent.com/10963531/123357635-849d1b80-d52f-11eb-8eda-c397aa925ef2.png) 55 Min 50 sec vs 1Hours 4 mins **What you expected to happen**: Duration value should be empty (like in Graph view) **Anything else we need to know**: <!-- How often does this problem occur? Once? Every time etc? Any relevant logs to include? Put them here in side a detail tag: <details><summary>x.log</summary> lots of stuff </details> -->
https://github.com/apache/airflow/issues/16646
https://github.com/apache/airflow/pull/16695
98c12d49f37f6879e3e9fd926853f57a15ab761b
f0b3345ddc489627d73d190a1401804e7b0d9c4e
"2021-06-25T02:06:10Z"
python
"2021-06-28T15:23:19Z"
closed
apache/airflow
https://github.com/apache/airflow
16,635
["airflow/models/baseoperator.py", "docs/spelling_wordlist.txt", "tests/models/test_baseoperator.py"]
Update `airflow.models.baseoperator.chain()` function to support XComArgs
**Description** The `airflow.models.baseoperator.chain()` function is a very useful and convenient way to add sequential task dependencies in DAGs, but it only supports tasks of a `BaseOperator` type. **Use case / motivation** Users who create tasks via the `@task` decorator will not be able to use the `chain()` function to apply sequential dependencies that do not share an implicit `XComArg` dependency. **Are you willing to submit a PR?** Absolutely. 🚀 **Related Issues** None
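A short sketch of the usage this change would enable: passing the `XComArg`s returned by TaskFlow `@task` calls straight to `chain()`. Treat the chaining of XComArgs as the proposed behaviour rather than something the function already supported.

```python
from datetime import datetime

from airflow.decorators import dag, task
from airflow.models.baseoperator import chain

@dag(start_date=datetime(2021, 1, 1), schedule_interval=None)
def example_chain_taskflow():
    @task
    def extract():
        return 3

    @task
    def validate():
        return True

    @task
    def notify():
        print("done")

    # Each call returns an XComArg; the request is for chain() to accept these
    # and wire them up sequentially: extract -> validate -> notify.
    chain(extract(), validate(), notify())

example_dag = example_chain_taskflow()
```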
https://github.com/apache/airflow/issues/16635
https://github.com/apache/airflow/pull/16732
9f8f81f27d367fcde171173596f1f30a3a7069f8
7529546939250266ccf404c2eea98b298365ef46
"2021-06-24T15:30:09Z"
python
"2021-07-14T07:43:41Z"
closed
apache/airflow
https://github.com/apache/airflow
16,614
["airflow/www/package.json", "airflow/www/yarn.lock"]
Connection password not being masked in default logging
```
from airflow.hooks.base_hook import BaseHook

BaseHook.get_connection('my_connection_id')
```
The second line prints out my connection details, including the connection password, in the Airflow logs. Earlier, connection passwords were masked by default. https://airflow.apache.org/docs/apache-airflow/stable/_modules/airflow/hooks/base.html Running the above statement is what produces the log line. Is there a way to disable the logging so it does not print the connection password in my logs?
https://github.com/apache/airflow/issues/16614
https://github.com/apache/airflow/pull/30112
869c1e3581fa163bbaad11a2d5ddaf8cf433296d
e09d00e6ab444ec323805386c2056c1f8a0ae6e7
"2021-06-23T12:17:39Z"
python
"2023-03-17T15:08:45Z"
closed
apache/airflow
https://github.com/apache/airflow
16,611
["airflow/kubernetes/pod_generator.py", "tests/kubernetes/models/test_secret.py", "tests/kubernetes/test_pod_generator.py"]
Pod name with period is causing issues for some apps in k8s
**Apache Airflow version**: 2.0.0+ **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): All versions affected **Environment**: It affects all possible host configurations. The issue impacts KubernetesPodOperator. Which scripts will actually be affected inside the KubernetesPodOperator container is a big question. In my scenario it was locally executed **Apache Spark**. **What happened**: This issue is a consequence of the change introduced in this commit/line: https://github.com/apache/airflow/commit/862443f6d3669411abfb83082c29c2fad7fcf12d#diff-01764c9ba7b2270764a59e7ff281c95809071b1f801170ee75a02481a8a730aaR475 The pod operator generates a pod name that has a period in it. Whatever pod name is picked gets inherited by the container itself, and as a result becomes its hostname. The problem with hostnames in Linux is that if a hostname contains a period, it is immediately assumed to be a valid domain that DNS should be able to resolve. The md5 digest in this odd case is treated as the first-level "domain". Obviously, some libraries have no idea what to do with a DNS domain like `airflow-pod-operator.9b702530e25f40f2b1cf6220842280c`, so they throw exceptions (Unknown host, Unable to resolve hostname, or similar). In my use case the component that was barking was **Apache Spark** in local mode. The error line refers to the Spark URL: > 21/05/21 11:20:01 ERROR SparkApp$: org.apache.spark.SparkException: Invalid Spark URL: spark://HeartbeatReceiver@airflow-pod-operator.9b702530e25e30f2b1cf1622082280c:38031 **What you expected to happen**: Apache Spark just works without issues and is able to resolve itself by hostname without any code changes. **How to reproduce it**: As I'm not certain about the full list of affected applications, I would for now assume anything that tries to resolve the "current pod's" hostname. In my scenario I was running the Wordcount example of Apache Spark in local mode in KubernetesPodOperator. Perhaps there are easier ways to replicate it. **Anything else we need to know**: Having this kind of confusion between an unresolvable "domain" and a hostname is in my opinion very bad and should be discouraged. The way for me to mitigate this issue right now was to build my own subclass of KubernetesPodOperator that overrides the `create_pod_request_obj` method to use the older way of generating unique pod-name suffixes, which used `-` (hyphen) instead of `.` (period) notation in the name.
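A rough sketch of the kind of subclass workaround described above; it assumes `create_pod_request_obj` returns a `V1Pod`, which matches my reading of the cncf.kubernetes provider, but treat the details as illustrative rather than the exact mitigation used.

```python
import uuid

from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

class HyphenNamedPodOperator(KubernetesPodOperator):
    """Rewrite '<name>.<digest>' pod names as '<name>-<hex>' so the hostname is not a fake domain."""

    def create_pod_request_obj(self):
        pod = super().create_pod_request_obj()
        base_name = pod.metadata.name.partition(".")[0]
        pod.metadata.name = f"{base_name}-{uuid.uuid4().hex[:8]}"
        return pod
```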
https://github.com/apache/airflow/issues/16611
https://github.com/apache/airflow/pull/19036
121e1f197ac580ea4712b7a0e72b02cf7ed9b27a
563315857d1f54f0c56059ff38dc6aa9af4f08b7
"2021-06-23T03:48:01Z"
python
"2021-11-30T05:00:06Z"
closed
apache/airflow
https://github.com/apache/airflow
16,610
["airflow/www/static/js/dag_dependencies.js"]
Dag Dependency page not showing anything
**Apache Airflow version**: 2.1. **Environment**: Ubuntu 20.04 - **Cloud provider or hardware configuration**: AWS - **OS** (e.g. from /etc/os-release): UBUNTU 20.04 LTS - **Kernel** (e.g. `uname -a`): Linux 20.04.1-Ubuntu SMP Tue Jun 1 09:54:15 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux - **Install tools**: python and pip **What happened**: After performing the upgrade from 2.0.2 to 2.10 using the guide available in the documentation, Airflow upgraded successfully, Dag dependency page isn't working as expected. The DAG dependency page doesn't show the dependency graph. <!-- (please include exact error messages if you can) --> **What you expected to happen**: I expected the dag dependency page to show the dags and their dependency in a Graph view <!-- What do you think went wrong? --> **How to reproduce it**: Its reproduced by opening these pages every time. ![Dag Dependency Page](https://user-images.githubusercontent.com/43160555/123028222-6eee0000-d422-11eb-9664-a8c0ee2a6723.png) How often does this problem occur? Once? Every time etc? Every time Any relevant logs to include? Put them here in side a detail tag: <details><summary>Upgrade Check Log</summary> /home/ubuntu/env_airflow/lib/python3.8/site-packages/airflow/configuration.py:34 6 DeprecationWarning: The hide_sensitive_variable_fields option in [admin] has been moved to the hide_sensitive_var_conn_fields option in [core] - the old setting has been used, but please update your config. /home/ubuntu/env_airflow/lib/python3.8/site-packages/airflow/configuration.py:34 6 DeprecationWarning: The default_queue option in [celery] has been moved to the default_queue option in [operators] - the old setting has been used, but please update your config. /home/ubuntu/env_airflow/lib/python3.8/site-packages/airflow/plugins_manager.py: 239 DeprecationWarning: This decorator is deprecated.In previous versions, all subclasses of BaseOperator must use apply_default decorator for the`default_args` feature to work properly. In current version, it is optional. The decorator is applied automatically using the metaclass. /home/ubuntu/env_airflow/lib/python3.8/site-packages/airflow/configuration.py:34 6 DeprecationWarning: The default_queue option in [celery] has been moved to the default_queue option in [operators] - the old setting has been used, but please update your config. </details>
https://github.com/apache/airflow/issues/16610
https://github.com/apache/airflow/pull/24166
7e56bf662915cd58849626d7a029a4ba70cdda4d
3e51d8029ba34d3a76b3afe53e257f1fb5fb9da1
"2021-06-23T03:42:06Z"
python
"2022-06-07T11:25:31Z"
closed
apache/airflow
https://github.com/apache/airflow
16,587
["airflow/www/auth.py", "airflow/www/security.py", "airflow/www/templates/airflow/no_roles_permissions.html", "airflow/www/views.py", "tests/www/test_security.py", "tests/www/views/test_views_acl.py", "tests/www/views/test_views_base.py"]
Users with Guest role stuck in redirect loop upon login
Airflow 2.1.0, Docker **What happened**: Users with the Guest role assigned are stuck in a redirect loop once they attempt to login successfully to the web interface. **What you expected to happen**: Get minimal access to the dashboard with the appropriate views for a guest role **How to reproduce it**: 1. Assign a guest role to any user and remove any other roles with the administrator user. 2. Logout from the admin account 3. Login as the guest user 4. You will notice constant HTTP redirects, and the dashboard will not show up.
https://github.com/apache/airflow/issues/16587
https://github.com/apache/airflow/pull/17838
933d863d6d39198dee40bd100658aa69e95d1895
e18b6a6d19f9ea0d8fe760ba00adf38810f0e510
"2021-06-22T12:57:03Z"
python
"2021-08-26T20:59:30Z"
closed
apache/airflow
https://github.com/apache/airflow
16,573
["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"]
State of this instance has been externally set to up_for_retry. Terminating instance.
**Apache Airflow version**: 2.0.1 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.18.14 Environment: Cloud provider or hardware configuration: Azure OS (e.g. from /etc/os-release): Kernel (e.g. uname -a): Install tools: Others: **What happened**: An occasional airflow tasks fails with the following error ``` [2021-06-21 05:39:48,424] {local_task_job.py:184} WARNING - State of this instance has been externally set to up_for_retry. Terminating instance. [2021-06-21 05:39:48,425] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 259 [2021-06-21 05:39:48,426] {taskinstance.py:1238} ERROR - Received SIGTERM. Terminating subprocesses. [2021-06-21 05:39:48,426] {bash.py:185} INFO - Sending SIGTERM signal to bash process group [2021-06-21 05:39:49,133] {process_utils.py:66} INFO - Process psutil.Process(pid=329, status='terminated', started='04:32:14') (329) terminated with exit code None [2021-06-21 05:39:50,278] {taskinstance.py:1454} ERROR - Task received SIGTERM signal Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1112, in _run_raw_task self._prepare_and_execute_task_with_callbacks(context, task) File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1284, in _prepare_and_execute_task_with_callbacks result = self._execute_task(context, task_copy) File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1309, in _execute_task result = task_copy.execute(context=context) File "/usr/local/lib/python3.7/site-packages/airflow/operators/bash.py", line 171, in execute for raw_line in iter(self.sub_process.stdout.readline, b''): File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1240, in signal_handler raise AirflowException("Task received SIGTERM signal") airflow.exceptions.AirflowException: Task received SIGTERM signal ``` There is no indication as to what caused this error. The worker instance is healthy and task did not hit the task timeout. **What you expected to happen**: Task to complete successfully. If a task fad to fail for unavoidable reason (like timeout), it would be helpful to provide the reason for the failure. **How to reproduce it**: I'm not able to reproduce it consistently. It happens every now and then with the same error as provided above. I'm also wish to know how to debug these failures
https://github.com/apache/airflow/issues/16573
https://github.com/apache/airflow/pull/19375
e57c74263884ad5827a5bb9973eb698f0c269cc8
38d329bd112e8be891f077b4e3300182930cf74d
"2021-06-21T20:28:21Z"
python
"2021-11-03T06:45:41Z"
closed
apache/airflow
https://github.com/apache/airflow
16,564
["airflow/models/taskinstance.py"]
No more SQL Exception in 2.1.0
**Apache Airflow version**: 2.1.0 **Environment**: - self hosted docker-compose based stack **What happened**: Using JDBCOperator, if the SQL results in an error we only get ``` [2021-06-21 11:05:55,377] {local_task_job.py:151} INFO - Task exited with return code 1 ``` Before upgrading from 2.0.1 we got error details in the logs: ``` jaydebeapi.DatabaseError: java.sql.SQLException:... [MapR][DrillJDBCDriver](500165) Query execution error. Details: VALIDATION ERROR:... ``` **What you expected to happen**: See the `SQLException` in the logs **How to reproduce it**: Perform a generic SQL task with broken SQL. **Anything else we need to know**: I think it is somehow related to https://github.com/apache/airflow/commit/abcd48731303d9e141bdc94acc2db46d73ccbe12#diff-4fd3febb74d94b2953bf5e9b4a981b617949195f83d96f4a589c3078085959b7R202
https://github.com/apache/airflow/issues/16564
https://github.com/apache/airflow/pull/16805
b5ef3c841f735ea113e5d3639a620c2b63092e43
f40ade4643966b3e78493589c5459ca2c01db0c2
"2021-06-21T13:15:55Z"
python
"2021-07-06T19:19:27Z"
closed
apache/airflow
https://github.com/apache/airflow
16,533
["docs/exts/docs_build/fetch_inventories.py"]
Documentation building fails if helm-chart is not being built
When you've never built `helm-chart` documentation package locally. the intersphinx repository is missing for it and it cannot be downloaded as the helm-chart package is never published as package (I guess) . This fails for example our command to build provider's documentation when you release providers: ``` cd "${AIRFLOW_REPO_ROOT}" ./breeze build-docs -- \ --for-production \ --package-filter apache-airflow-providers \ --package-filter 'apache-airflow-providers-*' ``` Adding `--package-filter 'helm-chart'` helps, but it also builds the helm-chart documentation which is undesired in this case (and it causes the docs-building for most providers to fail the first pass, until the `helm-chart` documentation is built Possibly there is a way to get rid of that dependency ? The error you get: ``` apache-airflow-providers Traceback (most recent call last): apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/cmd/build.py", line 279, in build_main apache-airflow-providers args.tags, args.verbosity, args.jobs, args.keep_going) apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/application.py", line 278, in __init__ apache-airflow-providers self._init_builder() apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/application.py", line 337, in _init_builder apache-airflow-providers self.events.emit('builder-inited') apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/events.py", line 110, in emit apache-airflow-providers results.append(listener.handler(self.app, *args)) apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/ext/intersphinx.py", line 238, in load_mappings apache-airflow-providers updated = apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/ext/intersphinx.py", line 238, in <listcomp> apache-airflow-providers updated = apache-airflow-providers File "/usr/local/lib/python3.6/concurrent/futures/_base.py", line 425, in result apache-airflow-providers return self.__get_result() apache-airflow-providers File "/usr/local/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result apache-airflow-providers raise self._exception apache-airflow-providers File "/usr/local/lib/python3.6/concurrent/futures/thread.py", line 56, in run apache-airflow-providers result = self.fn(*self.args, **self.kwargs) apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/ext/intersphinx.py", line 224, in fetch_inventory_group apache-airflow-providers "with the following issues:") + "\n" + issues) apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 1642, in warning apache-airflow-providers self.log(WARNING, msg, *args, **kwargs) apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/util/logging.py", line 126, in log apache-airflow-providers super().log(level, msg, *args, **kwargs) apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 1674, in log apache-airflow-providers self.logger.log(level, msg, *args, **kwargs) apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 1374, in log apache-airflow-providers self._log(level, msg, args, **kwargs) apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 1444, in _log apache-airflow-providers self.handle(record) apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 1454, in handle apache-airflow-providers self.callHandlers(record) 
apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 1516, in callHandlers apache-airflow-providers hdlr.handle(record) apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 861, in handle apache-airflow-providers rv = self.filter(record) apache-airflow-providers File "/usr/local/lib/python3.6/logging/__init__.py", line 720, in filter apache-airflow-providers result = f.filter(record) apache-airflow-providers File "/usr/local/lib/python3.6/site-packages/sphinx/util/logging.py", line 422, in filter apache-airflow-providers raise exc apache-airflow-providers sphinx.errors.SphinxWarning: failed to reach any of the inventories with the following issues: apache-airflow-providers intersphinx inventory '/opt/airflow/docs/_inventory_cache/helm-chart/objects.inv' not fetchable due to <class 'FileNotFoundError'>: [Errno 2] No such file or directory: '/opt/airflow/docs/_inventory_cache/helm-chart/objects.inv' apache-airflow-providers apache-airflow-providers [91mWarning, treated as error:[39;49;00m apache-airflow-providers failed to reach any of the inventories with the following issues: apache-airflow-providers intersphinx inventory '/opt/airflow/docs/_inventory_cache/helm-chart/objects.inv' not fetchable due to <class 'FileNotFoundError'>: [Errno 2] No such file or directory: '/opt/airflow/docs/_inventory_cache/helm-chart/objects.inv' ```
https://github.com/apache/airflow/issues/16533
https://github.com/apache/airflow/pull/16535
28e285ef9a4702b3babf6ed3c094af07c017581f
609620a39c79dc410943e5fcce0425f6ef32cd3e
"2021-06-18T18:56:03Z"
python
"2021-06-19T01:20:32Z"
closed
apache/airflow
https://github.com/apache/airflow
16,520
["airflow/hooks/dbapi.py", "airflow/providers/postgres/hooks/postgres.py", "tests/hooks/test_dbapi.py", "tests/providers/postgres/hooks/test_postgres.py"]
DbApiHook.get_uri() doesn't follow PostgresHook schema argument
**What happened**: `get_uri()` and `get_sqlalchemy_engine()` are not overridden in PostgresHook. When using `PostgresHook('CONNECTION_NAME', schema='another_schema').get_sqlalchemy_engine()`, the connection's default schema setting is still used through `get_uri()` instead of the schema that is assigned to `PostgresHook()`. **What you expected to happen**: `get_uri()` should honor the schema passed to PostgresHook. **How to reproduce it**: `PostgresHook('CONNECTION_NAME', schema='another_schema').get_uri()`
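A small reproduction sketch, assuming a Postgres connection id `postgres_default` exists; the point is that the printed URI reflects the connection's own schema rather than the one passed to the hook.

```python
from airflow.providers.postgres.hooks.postgres import PostgresHook

hook = PostgresHook(postgres_conn_id="postgres_default", schema="another_schema")

# Expected to end in /another_schema, but at the time of the report it still
# ends in the schema stored on the connection itself.
print(hook.get_uri())
engine = hook.get_sqlalchemy_engine()  # built from the same URI, so same problem
```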
https://github.com/apache/airflow/issues/16520
https://github.com/apache/airflow/pull/16521
86c20910aed48f7d5b2ebaa91fa40d47c52d7db3
3ee916e9e11f0e9d9c794fa41b102161df3f2cd4
"2021-06-18T03:38:10Z"
python
"2021-06-23T18:54:05Z"
closed
apache/airflow
https://github.com/apache/airflow
16,502
["airflow/providers/amazon/aws/hooks/athena.py", "tests/providers/amazon/aws/hooks/test_athena.py"]
Feature: Method in AWSAthenaHook to get output URI from S3
**Description** Athena is a commonly used service amongst data engineers, and one often needs the location of the CSV result file in S3. This method would return the S3 URI of the CSV result of an Athena query. **Use case / motivation** The current implementation of [AWSAthenaHook](https://airflow.apache.org/docs/apache-airflow/1.10.12/_modules/airflow/contrib/hooks/aws_athena_hook.html) has methods to get result data as a list of dictionaries, which is not always desired. Instead, the S3 URI of the CSV file is more apt when the result has many rows. One can use the S3 URI to process the data further (at some other service like Batch). This method would let the user get the S3 URI of the result CSV file of an Athena query. (If there is some way to add a method to get the S3 file URI in `AWSAthenaOperator`, that would be helpful as well.) **Are you willing to submit a PR?** Yes
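A hedged sketch of how such a method could work under the hood, using boto3's Athena client directly; `get_query_execution` and the `OutputLocation` field are real boto3 APIs, but the helper below is hypothetical, not the hook's eventual method.

```python
import boto3

def get_output_location(query_execution_id: str, region_name: str = "us-east-1") -> str:
    """Return the S3 URI of the result file for a submitted Athena query."""
    client = boto3.client("athena", region_name=region_name)
    execution = client.get_query_execution(QueryExecutionId=query_execution_id)
    return execution["QueryExecution"]["ResultConfiguration"]["OutputLocation"]

# e.g. "s3://my-athena-results/some-prefix/<query-execution-id>.csv"
```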
https://github.com/apache/airflow/issues/16502
https://github.com/apache/airflow/pull/20124
70818319a038f1d17c179c278930b5b85035085d
0e2a0ccd3087f53222e7859f414daf0ffa50dfbb
"2021-06-17T10:55:48Z"
python
"2021-12-08T20:38:27Z"
closed
apache/airflow
https://github.com/apache/airflow
16,500
["airflow/www/static/js/dag.js", "airflow/www/static/js/dags.js"]
Pause/Unpause DAG tooltip doesn't disappear after click
**Apache Airflow version**: 2.2.0dev **What happened**: The on/off toggle shows a "Pause/Unpause DAG" tooltip when hovering over it. This works as expected. However, if you click the toggle, the tooltip sticks until you click elsewhere on the screen. **What you expected to happen**: The tooltip should disappear when the user isn't hovering over the button. **How to reproduce it**: ![2021-06-17_11-48-15](https://user-images.githubusercontent.com/45845474/122364575-8c265880-cf62-11eb-834f-4fa3404b3f4b.gif)
https://github.com/apache/airflow/issues/16500
https://github.com/apache/airflow/pull/17957
9c19f0db7dd39103ac9bc884995d286ba8530c10
ee93935bab6e5841b48a07028ea701d9aebe0cea
"2021-06-17T08:57:10Z"
python
"2021-09-01T12:14:41Z"
closed
apache/airflow
https://github.com/apache/airflow
16,493
["airflow/www/static/js/connection_form.js"]
UI: Port is not an integer error on Connection Test
When adding the port in the Webserver, the connection test errors because it treats the port as a string instead of an int.

![image](https://user-images.githubusercontent.com/8811558/122307294-f65ae100-cf01-11eb-80fd-1623692c8ff3.png)

Error in Webserver:

```
[2021-06-16 22:56:33,430] {validation.py:204} ERROR - http://localhost:28080/api/v1/connections/test validation error: '25433' is not of type 'integer' - 'port'
```

cc @msumit
https://github.com/apache/airflow/issues/16493
https://github.com/apache/airflow/pull/16497
1c82b4d015a1785a881bb916ffa0265249c2cde7
e72e5295fd5e710599bc0ecc9a70b0b3b5728f38
"2021-06-16T23:22:07Z"
python
"2021-06-17T11:39:26Z"
closed
apache/airflow
https://github.com/apache/airflow
16,468
["docs/apache-airflow/howto/email-config.rst"]
SMTP connection type clarifications are needed
**Description**

The documentation on how to set up SMTP is not clear: https://airflow.apache.org/docs/apache-airflow/stable/howto/email-config.html

It says to create a connection named `smtp_default` but does not say what type of connection to create. There is no connection type named `SMTP`. It was suggested to me in Airflow Slack to create it as an `HTTP` connection, but this connection type does not contain all the fields necessary to configure SMTP, particularly with TLS.

**Use case / motivation**

I would like the instructions for setting up SMTP in Airflow to be clearer, and it would make sense for there to be an `SMTP` connection type with all necessary fields.
https://github.com/apache/airflow/issues/16468
https://github.com/apache/airflow/pull/16523
bbc627a3dab17ba4cf920dd1a26dbed6f5cebfd1
df1220a420b8fd7c6fcdcacc5345459c284acff2
"2021-06-15T21:17:14Z"
python
"2021-06-18T12:47:15Z"
closed
apache/airflow
https://github.com/apache/airflow
16,460
["airflow/cli/commands/dag_command.py"]
Typos in Backfill's `task-regex` param
Example DAG structure:

```
from datetime import datetime

from airflow import DAG
from airflow.models.dag import DagContext
from airflow.operators.bash import BashOperator

default_args = {
    'owner': 'dimon',
    'depends_on_past': False,
    'start_date': datetime(2021, 1, 10)
}

dag = DAG(
    'dummy-dag',
    schedule_interval='21 2 * * *',
    catchup=False,
    default_args=default_args
)
DagContext.push_context_managed_dag(dag)

task1 = BashOperator(task_id='task1', bash_command='echo 1')
task2 = BashOperator(task_id='task2', bash_command='echo 2')
task2 << task1
task3 = BashOperator(task_id='task3', bash_command='echo 3')
```

Let's say you mistyped and entered `--task-regex task4`. When the backfill starts, it first creates a new, empty DagRun and puts it in the DB. The backfill job then tries to find tasks that match the regex you entered, obviously finds none, and gets stuck in the "running" state, together with the newly created DagRun, forever.
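To see why the typo silently selects nothing, the task selection can be previewed up front. The helper below is only a rough sketch: it loads the DAG from the configured dags folder and applies a plain regex search, so the exact matching semantics of the backfill CLI may differ.

```python
import re

from airflow.models import DagBag


def tasks_matching_regex(dag_id: str, task_regex: str):
    """Roughly preview which task ids a `--task-regex` value would select."""
    dag = DagBag().get_dag(dag_id)
    pattern = re.compile(task_regex)
    return [task.task_id for task in dag.tasks if pattern.search(task.task_id)]


print(tasks_matching_regex("dummy-dag", "task4"))  # [] -> the backfill has nothing to run
```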
https://github.com/apache/airflow/issues/16460
https://github.com/apache/airflow/pull/16461
bf238aa21da8c0716b251575216434bb549e64f0
f2c79b238f4ea3ee801038a6305b925f2f4e753b
"2021-06-15T14:11:59Z"
python
"2021-06-16T20:07:58Z"
closed
apache/airflow
https://github.com/apache/airflow
16,435
["airflow/www/static/css/main.css", "airflow/www/utils.py", "setup.cfg", "tests/www/test_utils.py"]
Switch Markdown engine to markdown-it-py
Copying from #16414:

The current Markdown engine does not support [fenced code blocks](https://python-markdown.github.io/extensions/fenced_code_blocks/), so they still won’t work after this change. Python-Markdown’s fenced code support is pretty spotty, and if we want to fix that for good, IMO we should switch to another Markdown parser. [markdown-it-py](https://github.com/executablebooks/markdown-it-py) (the parser backing [MyST](https://myst-parser.readthedocs.io/en/latest/using/intro.html)) is a popular choice for [CommonMark](https://commonmark.org/) support, which is much closer to [GitHub-Flavored Markdown](https://github.github.com/gfm/), the dialect almost everyone thinks of as standard Markdown (which is unfortunate because GFM is not standard, but that’s how the world works).
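For what it's worth, a quick sketch of what rendering with markdown-it-py looks like; fenced code blocks and nested lists are handled natively by CommonMark:

```python
from markdown_it import MarkdownIt

md = MarkdownIt("commonmark")
html = md.render(
    "- a list item\n"
    "  - a nested item\n"
    "\n"
    "```python\n"
    "print('fenced code blocks are part of CommonMark')\n"
    "```\n"
)
print(html)  # nested <ul> plus <pre><code class="language-python">...</code></pre>
```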
https://github.com/apache/airflow/issues/16435
https://github.com/apache/airflow/pull/19702
904cc121b83ecfaacba25433a7911a2541b2c312
88363b543f6f963247c332e9d7830bc782ed6e2d
"2021-06-14T15:09:17Z"
python
"2022-06-21T09:24:13Z"
closed
apache/airflow
https://github.com/apache/airflow
16,367
["airflow/www/static/js/tree.js", "airflow/www/templates/airflow/tree.html"]
Tree view shown incorrect dag runs
Apache Airflow version: 2.1.0

On Tree view, switch to 50 Runs, and the view is broken:

![screenshot](https://user-images.githubusercontent.com/16779368/121520434-93e27c00-c9fb-11eb-8625-65c07a1ac770.png)
https://github.com/apache/airflow/issues/16367
https://github.com/apache/airflow/pull/16437
5c86e3d50970e61d0eabd0965ebdc7b5ecf3bf14
6087a09f89c7da4aac47eab3756a7fe24e3b602b
"2021-06-10T11:53:47Z"
python
"2021-06-14T20:02:35Z"
closed
apache/airflow
https://github.com/apache/airflow
16,364
["airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/operators/ssh.py", "docs/apache-airflow-providers-ssh/connections/ssh.rst", "tests/providers/ssh/hooks/test_ssh.py", "tests/providers/ssh/operators/test_ssh.py"]
Timeout is ambiguous in SSHHook and SSHOperator
In SSHHook the timeout argument of the constructor is used to set a connection timeout. This is fine. But in SSHOperator the timeout argument of the constructor is used for *both* the timeout of the SSHHook *and* the timeout of the command itself (see paramiko's ssh client `exec_command` use of the timeout parameter).

This ambiguous use of the same parameter is very dirty. I see two ways to clean up the behaviour:

1. Let the SSHHook constructor be the only way to handle the connection timeout (thus, if one wants a specific timeout they should explicitly build a hook and pass it to the operator via the operator's constructor).
2. Split the timeout argument in SSHOperator into two arguments, for example conn_timeout and cmd_timeout.

The choice between 1 and 2 depends on how frequently people are expected to want to change the connection timeout. If that is something very frequent, go for 2; if not, go for 1.

BR and thanks for the code!
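At the paramiko level the two timeouts are already distinct, which is what makes option 2 natural. A hedged sketch in plain paramiko (not the operator change itself; host and user are placeholders):

```python
import paramiko


def run_remote_command(host, user, command, conn_timeout=10.0, cmd_timeout=300.0):
    """Run `command` over SSH with separate connection and command timeouts."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # This timeout only bounds establishing the TCP/SSH connection.
    client.connect(hostname=host, username=user, timeout=conn_timeout)
    try:
        # This timeout applies to I/O on the command's channel instead.
        _stdin, stdout, stderr = client.exec_command(command, timeout=cmd_timeout)
        exit_status = stdout.channel.recv_exit_status()
        return exit_status, stdout.read().decode(), stderr.read().decode()
    finally:
        client.close()
```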
https://github.com/apache/airflow/issues/16364
https://github.com/apache/airflow/pull/17236
0e3b06ba2f3898c938c3d191d0c2bc8d85c318c7
68d99bc5582b52106f876ccc22cc1e115a42b252
"2021-06-10T09:32:15Z"
python
"2021-09-10T13:16:15Z"
closed
apache/airflow
https://github.com/apache/airflow
16,359
["airflow/www/static/js/graph.js"]
Dag graph aligned at bottom when expanding a TaskGroup
**Apache Airflow version**: 2.1.0

**Kubernetes version (if you are using kubernetes)**: v1.17.5

**Environment**:

- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- **Kernel** (e.g. `uname -a`): Linux airflow-scheduler-5c6fcfbf9d-mh57k 4.14.138-rancher #1 SMP Sat Aug 10 11:25:46 UTC 2019 x86_64 GNU/Linux
- **Install tools**:
- **Others**:

**What happened**:

When expanding a TaskGroup, the graph is placed at the bottom (it disappears from the current display).

Graph collapsed, placed at top:

![2021-06-07 11_36_52-collapsed](https://user-images.githubusercontent.com/10963531/121449654-db1b2f00-c95f-11eb-89df-a458aadad422.png)

Graph at bottom when clicking on a TaskGroup:

![2021-06-07 11_36_52-expanded](https://user-images.githubusercontent.com/10963531/121449698-eec69580-c95f-11eb-8750-2e96b04f7628.png)

**What you expected to happen**:

The graph should stay at the top, to avoid having to scroll down.
https://github.com/apache/airflow/issues/16359
https://github.com/apache/airflow/pull/16484
c158d4c5c4e2fa9eb476fd49b6db4781550986a5
f1675853a5ed9b779ee2fc13bb9aa97185472bc7
"2021-06-10T01:20:28Z"
python
"2021-06-16T18:20:19Z"
closed
apache/airflow
https://github.com/apache/airflow
16,326
["airflow/jobs/base_job.py", "tests/jobs/test_base_job.py"]
CeleryKubernetesExecutor is broken in 2.1.0
Tested with both chart 1.1.0rc1 (i.e. main branch right now) and 1.0.0 in Airflow 2.1.0. The scheduler does not exit immediately (that was an issue < 2.1.0), but all tasks fail like this:

```
[2021-06-08 15:30:17,167] {scheduler_job.py:1241} ERROR - Executor reports task instance <TaskInstance: sqoop_acquisition.terminate_job_flow 2021-06-08 13:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2021-06-08 15:30:17,170] {scheduler_job.py:1241} ERROR - Executor reports task instance <TaskInstance: gsheets.state_mapping.to_s3 2021-06-08 14:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2021-06-08 15:30:17,171] {scheduler_job.py:1241} ERROR - Executor reports task instance <TaskInstance: gsheets.app_event_taxonomy.to_s3 2021-06-08 14:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2021-06-08 15:30:17,172] {scheduler_job.py:1241} ERROR - Executor reports task instance <TaskInstance: gsheets.strain_flavors.to_s3 2021-06-08 14:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2021-06-08 15:30:19,053] {scheduler_job.py:1205} INFO - Executor reports execution of reporting_8hr.dev.cannalytics.feature_duration.sql execution_date=2021-06-08 07:00:00+00:00 exited with status failed for try_number 1
[2021-06-08 15:30:19,125] {scheduler_job.py:1241} ERROR - Executor reports task instance <TaskInstance: reporting_8hr.dev.cannalytics.feature_duration.sql 2021-06-08 07:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2021-06-08 15:30:23,842] {dagrun.py:429} ERROR - Marking run <DagRun gsheets @ 2021-06-08 14:00:00+00:00: scheduled__2021-06-08T14:00:00+00:00, externally triggered: False> failed
```

@kaxil @jedcunningham do you see this when you run CKE? Any suggestions?
https://github.com/apache/airflow/issues/16326
https://github.com/apache/airflow/pull/16700
42b74a7891bc17fed0cf19e1c7f354fdcb3455c9
7857a9bde2e189881f87fe4dc0cdce7503895c03
"2021-06-08T14:36:18Z"
python
"2021-06-29T22:39:34Z"
closed
apache/airflow
https://github.com/apache/airflow
16,310
["airflow/utils/db.py", "airflow/utils/session.py"]
Enable running airflow db init in parallel
**Apache Airflow version**: 2.0.1

**Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Not applicable

**Environment**:

- **Cloud provider or hardware configuration**: None
- **OS** (e.g. from /etc/os-release): Ubuntu
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:

**What happened**:

1. Ran `airflow db init` on MySQL in parallel in two command lines. Only one command did the migrations; the other one waited. But connections were inserted twice. I would like them not to be added twice.
2. Ran `airflow db init` on Postgres in parallel in two command lines. Both command lines started doing migrations on the same db in parallel. I would like one command to run and the other to wait.

**What you expected to happen**:

1. For MySQL: connections and other config objects to be inserted only once.
2. For Postgres: only one migration to be performed at the same time for the same db.

**How to reproduce it**:

Scenario 1: Set up Airflow so that it uses MySQL. Run `airflow db init` in two command lines, side by side.

Scenario 2: Set up Airflow so that it uses Postgres. Run `airflow db init` in two command lines, side by side.

**Anything else we need to know**:

This problem occurs every time.
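One way to get the "only one migration at a time" behaviour on Postgres is an advisory lock taken before running migrations. The sketch below only illustrates the idea: the lock key is arbitrary and `run_migrations()` is a hypothetical stand-in, not an Airflow function.

```python
from contextlib import contextmanager

from sqlalchemy import create_engine, text


@contextmanager
def migration_lock(db_url: str, lock_key: int = 72650):
    """Serialize concurrent `airflow db init` runs via a Postgres advisory lock."""
    engine = create_engine(db_url)
    with engine.connect() as conn:
        # Blocks until the lock is free, so a second process simply waits.
        conn.execute(text("SELECT pg_advisory_lock(:key)"), {"key": lock_key})
        try:
            yield
        finally:
            conn.execute(text("SELECT pg_advisory_unlock(:key)"), {"key": lock_key})


# with migration_lock("postgresql+psycopg2://airflow:***@localhost/airflow"):
#     run_migrations()  # hypothetical stand-in for the alembic upgrade
```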
https://github.com/apache/airflow/issues/16310
https://github.com/apache/airflow/pull/17078
24d02bfa840ae2a315af4280b2c185122e3c30e1
fbc945d2a2046feda18e7a1a902a318dab9e6fd2
"2021-06-07T15:05:41Z"
python
"2021-07-19T09:51:35Z"
closed
apache/airflow
https://github.com/apache/airflow
16,306
["airflow/providers/tableau/hooks/tableau.py", "docs/apache-airflow-providers-tableau/connections/tableau.rst", "tests/providers/tableau/hooks/test_tableau.py"]
Tableau connection - Flag to disable SSL
**Description**

Add a new flag to be able to disable SSL verification in the Tableau connection (e.g. `{"verify": "False"}`), as it is not present in the current version, apache-airflow-providers-tableau 1.0.0.

**Use case / motivation**

Unable to disable SSL in the Tableau connection and therefore unable to use the TableauRefreshWorkbook operator.

**Are you willing to submit a PR?**

No

**Related Issues**

No
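Under the hood, tableauserverclient can skip certificate verification via `add_http_options`, which is the behaviour such a connection flag would need to switch on. A small sketch; the server URL, credentials, and site are placeholders:

```python
import tableauserverclient as TSC

server = TSC.Server("https://tableau.example.com")
# Disable TLS certificate verification before any request is made.
server.add_http_options({"verify": False})
server.use_server_version()

auth = TSC.TableauAuth("some_user", "some_password", site_id="some_site")
with server.auth.sign_in(auth):
    for workbook in TSC.Pager(server.workbooks):
        print(workbook.name)
```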
https://github.com/apache/airflow/issues/16306
https://github.com/apache/airflow/pull/16365
fc917af8b49a914d4404faebbec807679f0626af
df0746e133ca0f54adb93257c119dd550846bb89
"2021-06-07T14:02:34Z"
python
"2021-07-10T11:34:29Z"
closed
apache/airflow
https://github.com/apache/airflow
16,295
["airflow/utils/log/secrets_masker.py"]
JDBC operator not logging errors
Hi, Since Airflow 2.0, we are having issues with logging for the JDBC operator. When such a tasks fails, we only see `INFO - Task exited with return code 1` The actual error and stack trace is not present. It also seems to not try to execute it again, it only tries once even though my max_tries is 3. I am using a Local Executor, and logs are also stored locally. This issue occurs for both local installations and Docker. full log: ``` *** Reading local file: /home/stijn/airflow/logs/airflow_incr/fmc_mtd/2021-06-01T15:00:00+00:00/1.log [2021-06-01 18:00:13,389] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: airflow_incr.fmc_mtd 2021-06-01T15:00:00+00:00 [queued]> [2021-06-01 18:00:13,592] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: airflow_incr.fmc_mtd 2021-06-01T15:00:00+00:00 [queued]> [2021-06-01 18:00:13,592] {taskinstance.py:1067} INFO - -------------------------------------------------------------------------------- [2021-06-01 18:00:13,592] {taskinstance.py:1068} INFO - Starting attempt 1 of 4 [2021-06-01 18:00:13,593] {taskinstance.py:1069} INFO - -------------------------------------------------------------------------------- [2021-06-01 18:00:13,975] {taskinstance.py:1087} INFO - Executing <Task(JdbcOperator): fmc_mtd> on 2021-06-01T15:00:00+00:00 [2021-06-01 18:00:13,980] {standard_task_runner.py:52} INFO - Started process 957 to run task [2021-06-01 18:00:13,983] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'airflow_incr', 'fmc_mtd', '2021-06-01T15:00:00+00:00', '--job-id', '2841', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/100_FL_DAG_airflow_incr_20210531_122511.py', '--cfg-path', '/tmp/tmp67h9tgso', '--error-file', '/tmp/tmp4w35rr0g'] [2021-06-01 18:00:13,990] {standard_task_runner.py:77} INFO - Job 2841: Subtask fmc_mtd [2021-06-01 18:00:15,336] {logging_mixin.py:104} INFO - Running <TaskInstance: airflow_incr.fmc_mtd 2021-06-01T15:00:00+00:00 [running]> on host DESKTOP-VNC70B9.localdomain [2021-06-01 18:00:17,757] {taskinstance.py:1282} INFO - Exporting the following env vars: AIRFLOW_CTX_DAG_OWNER=Vaultspeed AIRFLOW_CTX_DAG_ID=airflow_incr AIRFLOW_CTX_TASK_ID=fmc_mtd AIRFLOW_CTX_EXECUTION_DATE=2021-06-01T15:00:00+00:00 AIRFLOW_CTX_DAG_RUN_ID=scheduled__2021-06-01T15:00:00+00:00 [2021-06-01 18:00:17,757] {jdbc.py:70} INFO - Executing: ['INSERT INTO "moto_fmc"."fmc_loading_history" \n\t\tSELECT \n\t\t\t\'airflow_incr\',\n\t\t\t\'airflow\',\n\t\t\t35,\n\t\t\tTO_TIMESTAMP(\'2021-06-01 16:00:00.000000\', \'YYYY-MM-DD HH24:MI:SS.US\'::varchar),\n\t\t\t"fmc_begin_lw_timestamp" + -15 * interval\'1 minute\',\n\t\t\tTO_TIMESTAMP(\'2021-06-01 16:00:00.000000\', \'YYYY-MM-DD HH24:MI:SS.US\'::varchar),\n\t\t\tTO_TIMESTAMP(\'2021-06-01 15:59:59.210732\', \'YYYY-MM-DD HH24:MI:SS.US\'::varchar),\n\t\t\tnull,\n\t\t\tnull\n\t\tFROM (\n\t\t\tSELECT MAX("fmc_end_lw_timestamp") as "fmc_begin_lw_timestamp" \n\t\t\tFROM "moto_fmc"."fmc_loading_history" \n\t\t\tWHERE "src_bk" = \'airflow\' \n\t\t\tAND "success_flag" = 1\n\t\t\tAND "load_cycle_id" < 35\n\t\t) SRC_WINDOW\n\t\tWHERE NOT EXISTS(SELECT 1 FROM "moto_fmc"."fmc_loading_history" WHERE "load_cycle_id" = 35)', 'TRUNCATE TABLE "airflow_mtd"."load_cycle_info" ', 'INSERT INTO "airflow_mtd"."load_cycle_info"("load_cycle_id","load_date") \n\t\t\tSELECT 35,TO_TIMESTAMP(\'2021-06-01 16:00:00.000000\', \'YYYY-MM-DD HH24:MI:SS.US\'::varchar)', 'TRUNCATE TABLE "airflow_mtd"."fmc_loading_window_table" ', 'INSERT INTO 
"airflow_mtd"."fmc_loading_window_table"("fmc_begin_lw_timestamp","fmc_end_lw_timestamp") \n\t\t\tSELECT "fmc_begin_lw_timestamp" + -15 * interval\'1 minute\', TO_TIMESTAMP(\'2021-06-01 16:00:00.000000\', \'YYYY-MM-DD HH24:MI:SS.US\'::varchar)\n\t\t\tFROM (\n\t\t\t\tSELECT MAX("fmc_end_lw_timestamp") as "fmc_begin_lw_timestamp" \n\t\t\t\tFROM "moto_fmc"."fmc_loading_history" \n\t\t\t\tWHERE "src_bk" = \'airflow\' \n\t\t\t\tAND "success_flag" = 1\n\t\t\t\tAND "load_cycle_id" < 35\n\t\t\t) SRC_WINDOW'] [2021-06-01 18:00:18,097] {base.py:78} INFO - Using connection to: id: test_dv. Host: jdbc:postgresql://localhost:5432/test_dv_stijn, Port: None, Schema: , Login: postgres, Password: ***, extra: {'extra__jdbc__drv_path': '/home/stijn/airflow/jdbc/postgresql-9.4.1212.jar', 'extra__jdbc__drv_clsname': 'org.postgresql.Driver', 'extra__google_cloud_platform__project': '', 'extra__google_cloud_platform__key_path': '', 'extra__google_cloud_platform__keyfile_dict': '', 'extra__google_cloud_platform__scope': '', 'extra__google_cloud_platform__num_retries': 5, 'extra__grpc__auth_type': '', 'extra__grpc__credential_pem_file': '', 'extra__grpc__scopes': '', 'extra__yandexcloud__service_account_json': '', 'extra__yandexcloud__service_account_json_path': '', 'extra__yandexcloud__oauth': '', 'extra__yandexcloud__public_ssh_key': '', 'extra__yandexcloud__folder_id': '', 'extra__kubernetes__in_cluster': False, 'extra__kubernetes__kube_config': '', 'extra__kubernetes__namespace': ''} [2021-06-01 18:00:18,530] {local_task_job.py:151} INFO - Task exited with return code 1 `
https://github.com/apache/airflow/issues/16295
https://github.com/apache/airflow/pull/21540
cb24ee9414afcdc1a2b0fe1ec0b9f0ba5e1bd7b7
bc1b422e1ce3a5b170618a7a6589f8ae2fc33ad6
"2021-06-07T08:52:12Z"
python
"2022-02-27T13:07:14Z"
closed
apache/airflow
https://github.com/apache/airflow
16,290
["airflow/providers/cncf/kubernetes/hooks/kubernetes.py", "airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py", "tests/providers/cncf/kubernetes/operators/test_spark_kubernetes.py"]
Allow deleting existing spark application before creating new one via SparkKubernetesOperator in Kubernetes
airflow version: v2.0.2

**Description**

Calling SparkKubernetesOperator within a DAG should delete the spark application if one already exists before submitting a new one.

**Use case / motivation**

```
t1 = SparkKubernetesOperator(
    task_id='spark_pi_submit',
    namespace="dummy",
    application_file="spark.yaml",
    kubernetes_conn_id="kubernetes",
    do_xcom_push=True,
    dag=dag,
)
```

After the first successful run, subsequent runs fail to submit the spark application:

> airflow.exceptions.AirflowException: Exception when calling -> create_custom_object: (409)
> Reason: Conflict
>
> {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"sparkapplications.sparkoperator.k8s.io "xxx" already exists","reason":"AlreadyExists","details":{"name":"xxx","group":"sparkoperator.k8s.io","kind":"sparkapplications"},"code":409}

**Expected Result**

Delete the existing spark application if one exists before submitting a new one.
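Until the operator supports this, the 409 can be avoided by deleting the existing custom resource first. A sketch with the plain kubernetes client; the `v1beta2` version string is an assumption about the spark-operator CRD in use:

```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException


def delete_spark_application_if_exists(name: str, namespace: str) -> None:
    """Delete an existing SparkApplication custom resource, ignoring 'not found'."""
    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    api = client.CustomObjectsApi()
    try:
        api.delete_namespaced_custom_object(
            group="sparkoperator.k8s.io",
            version="v1beta2",
            namespace=namespace,
            plural="sparkapplications",
            name=name,
        )
    except ApiException as exc:
        if exc.status != 404:  # "not found" simply means there is nothing to delete
            raise
```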
https://github.com/apache/airflow/issues/16290
https://github.com/apache/airflow/pull/21092
2bb69508d8d0248621ada682d1bdedef729bbcf0
3c5bc73579080248b0583d74152f57548aef53a2
"2021-06-06T17:46:11Z"
python
"2022-04-12T13:32:13Z"
closed
apache/airflow
https://github.com/apache/airflow
16,263
["airflow/www/utils.py", "tests/www/test_utils.py"]
Unable to use nested lists in DAG markdown documentation
**Apache Airflow version**: 2.0.2

**What happened**:

Tried to use the following markdown as a `doc_md` string passed to a DAG:

```markdown
- Example
  - Nested List
```

It was rendered in the web UI as a single list with no nesting or indentation.

**What you expected to happen**:

I expected the list to display as a nested list with visible indentation.

**How to reproduce it**:

Try to pass a DAG a `doc_md` string containing the above nested list. I think the bug will affect any markdown that relies on meaningful indentation (tabs or spaces).
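A workaround that renders correctly with the current engine is to indent nested items by four spaces, which is what Python-Markdown expects. A small sketch; the dag id and schedule are placeholders:

```python
from textwrap import dedent

from airflow import DAG
from airflow.utils.dates import days_ago

with DAG("doc_md_nested_list", start_date=days_ago(1), schedule_interval=None) as dag:
    # Python-Markdown only treats items indented by four spaces as nested;
    # two-space indentation (common in GitHub-Flavored Markdown) is flattened.
    dag.doc_md = dedent(
        """\
        - Example
            - Nested List (four-space indent)
        """
    )
```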
https://github.com/apache/airflow/issues/16263
https://github.com/apache/airflow/pull/16414
15ff2388e8a52348afcc923653f85ce15a3c5f71
6f9c0ceeb40947c226d35587097529d04c3e3e59
"2021-06-04T05:36:05Z"
python
"2021-06-13T00:30:11Z"
closed
apache/airflow
https://github.com/apache/airflow
16,256
["chart/templates/workers/worker-kedaautoscaler.yaml", "chart/values.schema.json", "chart/values.yaml"]
Helm chart: Keda add minReplicaCount
**Description**

Keda supports [minReplicaCount](https://keda.sh/docs/1.4/concepts/scaling-deployments/) (default value is 0). It would be great if users had the option in the helm chart to overwrite the default value.

**Use case / motivation**

Keda scales the workers to zero if there is no running DAG. Scaling is currently possible between 0 and `maxReplicaCount`; however, we want scaling between `minReplicaCount` and `maxReplicaCount`.

**Are you willing to submit a PR?**

Yes
https://github.com/apache/airflow/issues/16256
https://github.com/apache/airflow/pull/16262
7744f05997c1622678a8a7c65a2959c9aef07141
ef83f730f5953eff1e9c63056e32f633afe7d3e2
"2021-06-03T19:15:43Z"
python
"2021-06-05T23:35:44Z"
closed
apache/airflow
https://github.com/apache/airflow
16,238
["airflow/www/static/js/tree.js"]
Airflow Tree View for larger dags
Hi, the Airflow Web UI shows nothing on the tree view for larger dags (more than 100 tasks), although it works fine for smaller dags. Does anything need to be configured in `airflow.cfg` to support larger dags in the UI?

![alt larger](https://user-images.githubusercontent.com/42420177/120632899-ed3e2e80-c482-11eb-8c96-40bf751377e7.png)

Smaller Dag:

![alt smaller](https://user-images.githubusercontent.com/42420177/120633072-28d8f880-c483-11eb-8fad-ee3f59461894.png)

**Apache Airflow version**: 2.1.0 (Celery)

**Environment**:

- **Cloud provider or hardware configuration**: `AWS EC2`
- **OS** (e.g. from /etc/os-release): `Ubuntu 18.04`
- **Kernel** (e.g. `uname -a`): `5.4.0-1045-aws`
- **Install tools**: `pip`

**What you expected to happen**:

It renders correctly on `Airflow 1.10.13 (Sequential)`:

![image](https://user-images.githubusercontent.com/42420177/120633643-cf24fe00-c483-11eb-9cce-09f07be38905.png)

**How to reproduce it**:

Create a sample dag with `>=` 100 tasks.

**Anything else we need to know**:

The CLI command for viewing the dag tree works correctly: `airflow tasks list services_data_sync --tree`
https://github.com/apache/airflow/issues/16238
https://github.com/apache/airflow/pull/16522
6b0dfec01fd9fca7ab3be741d25528a303424edc
f9786d42f1f861c7a40745c00cd4d3feaf6254a7
"2021-06-03T10:57:07Z"
python
"2021-06-21T15:25:24Z"
closed
apache/airflow
https://github.com/apache/airflow
16,204
["airflow/sensors/external_task.py", "newsfragments/27190.significant.rst", "tests/sensors/test_external_task_sensor.py"]
ExternalTaskSensor does not fail when failed_states is set along with a execution_date_fn
**Apache Airflow version**: 2.x including main

**What happened**:

I am using an `execution_date_fn` in an `ExternalTaskSensor` that also sets `allowed_states=['success']` and `failed_states=['failed']`. When one of the N upstream tasks fails, the sensor hangs forever in the `poke` method because there is a bug in the check for `failed_states`.

**What you expected to happen**:

I would expect the `ExternalTaskSensor` to fail. I think this is due to a bug in the `poke` method: it should check `count_failed > 0` as opposed to checking `count_failed == len(dttm_filter)`. I've created a fix locally that works for my case and have submitted PR #16205 for it as reference.

**How to reproduce it**:

Create any `ExternalTaskSensor` that checks for `failed_states` and have one of the external DAG's tasks fail while others succeed. E.g.:

```
ExternalTaskSensor(
    task_id='check_external_dag',
    external_dag_id='external_dag',
    external_task_id=None,
    execution_date_fn=dependent_date_fn,
    allowed_states=['success'],
    failed_states=['failed'],
    check_existence=True)
```
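A toy comparison of the two checks (not the sensor's actual code) shows why a single failed upstream run never trips the current condition:

```python
def fails_with_current_check(count_failed: int, dttm_filter: list) -> bool:
    # Current behaviour: only trips when *every* monitored run failed.
    return count_failed == len(dttm_filter)


def fails_with_proposed_check(count_failed: int, dttm_filter: list) -> bool:
    # Proposed behaviour: trips as soon as *any* monitored run failed.
    return count_failed > 0


dates = ["2021-06-01", "2021-06-02", "2021-06-03"]
print(fails_with_current_check(1, dates))   # False -> poke loops forever
print(fails_with_proposed_check(1, dates))  # True  -> sensor fails promptly
```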
https://github.com/apache/airflow/issues/16204
https://github.com/apache/airflow/pull/27190
a504a8267dd5530923bbe2c8ec4d1b409f909d83
34e21ea3e49f1720652eefc290fc2972a9292d29
"2021-06-01T19:10:02Z"
python
"2022-11-10T09:20:32Z"
closed
apache/airflow
https://github.com/apache/airflow
16,202
["airflow/www/views.py", "tests/www/views/test_views_custom_user_views.py"]
Missing Show/Edit/Delete under Security -> Users in 2.1.0
**Apache Airflow version**: 2.1.0

**Browsers**: Chrome and Firefox

**What happened**:

Before upgrading to 2.1.0:

![before](https://user-images.githubusercontent.com/14293802/120359517-c1ca1100-c2d5-11eb-95ba-58ccc0a3ac37.png)

After upgrading to 2.1.0:

![after](https://user-images.githubusercontent.com/14293802/120359528-c4c50180-c2d5-11eb-8e04-f34846ea2736.png)

**What you expected to happen**:

Show/Edit/Delete under Security -> Users are available.

**How to reproduce it**:

Go to Security -> Users (as an admin, of course).
https://github.com/apache/airflow/issues/16202
https://github.com/apache/airflow/pull/17431
7dd11abbb43a3240c2291f8ea3981d393668886b
c1e2af4dd2bf868307caae9f2fa825562319a4f8
"2021-06-01T16:35:51Z"
python
"2021-08-09T14:46:05Z"
closed
apache/airflow
https://github.com/apache/airflow
16,148
["airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"]
Downloading files from S3 broken in 2.1.0
**Apache Airflow version**: 2.0.2 and 2.1.0 **Environment**: - **Cloud provider or hardware configuration**: running locally - **OS** (e.g. from /etc/os-release): - **Kernel** (e.g. `uname -a`): Darwin CSchillebeeckx-0589.local 19.6.0 Darwin Kernel Version 19.6.0: Tue Jan 12 22:13:05 PST 2021; root:xnu-6153.141.16~1/RELEASE_X86_64 x86_64 - **Install tools**: pip - **Others**: Running everything in Docker including Redis and Celery **What happened**: I'm seeing issues with downloading files from S3 on 2.1.0; a file is created after download, however the file content is empty! **What you expected to happen**: Non-empty files :) **How to reproduce it**: The DAG I'm running: ```python # -*- coding: utf-8 -*- import os import logging from airflow import DAG from airflow.operators.python import PythonOperator from airflow.utils.dates import days_ago from airflow.providers.amazon.aws.hooks.s3 import S3Hook def download_file_from_s3(): # authed with ENVIRONMENT variables s3_hook = S3Hook() bucket = 'some-secret-bucket' key = 'tmp.txt' with open('/tmp/s3_hook.txt', 'w') as f: s3_hook.get_resource_type("s3").Bucket(bucket).Object(key).download_file(f.name) logging.info(f"File downloaded: {f.name}") with open(f.name, 'r') as f_in: logging.info(f"FILE CONTENT {f_in.read()}") dag = DAG( "tmp", catchup=False, default_args={ "start_date": days_ago(1), }, schedule_interval=None, ) download_file_from_s3 = PythonOperator( task_id="download_file_from_s3", python_callable=download_file_from_s3, dag=dag ) ``` The logged output from 2.0.2 ``` *** Fetching from: http://ba1b92003f54:8793/log/tmp/download_file_from_s3ile/2021-05-28T17:25:58.851532+00:00/1.log [2021-05-28 10:26:04,227] {executor_loader.py:82} DEBUG - Loading core executor: CeleryExecutor [2021-05-28 10:26:04,239] {__init__.py:51} DEBUG - Loading core task runner: StandardTaskRunner [2021-05-28 10:26:04,252] {base_task_runner.py:62} DEBUG - Planning to run as the user [2021-05-28 10:26:04,255] {taskinstance.py:595} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> from DB [2021-05-28 10:26:04,264] {taskinstance.py:630} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> [2021-05-28 10:26:04,264] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks. [2021-05-28 10:26:04,265] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Task Instance Not Running' PASSED: True, Task is not in running state. [2021-05-28 10:26:04,279] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set. [2021-05-28 10:26:04,280] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying. [2021-05-28 10:26:04,280] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Task Instance State' PASSED: True, Task state queued was valid. 
[2021-05-28 10:26:04,280] {taskinstance.py:877} INFO - Dependencies all met for <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> [2021-05-28 10:26:04,281] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks. [2021-05-28 10:26:04,291] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Pool Slots Available' PASSED: True, ('There are enough open slots in %s to execute the task', 'default_pool') [2021-05-28 10:26:04,301] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set. [2021-05-28 10:26:04,301] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying. [2021-05-28 10:26:04,301] {taskinstance.py:892} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> dependency 'Task Concurrency' PASSED: True, Task concurrency is not set. [2021-05-28 10:26:04,301] {taskinstance.py:877} INFO - Dependencies all met for <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [queued]> [2021-05-28 10:26:04,301] {taskinstance.py:1068} INFO - -------------------------------------------------------------------------------- [2021-05-28 10:26:04,302] {taskinstance.py:1069} INFO - Starting attempt 1 of 1 [2021-05-28 10:26:04,302] {taskinstance.py:1070} INFO - -------------------------------------------------------------------------------- [2021-05-28 10:26:04,317] {taskinstance.py:1089} INFO - Executing <Task(PythonOperator): download_file_from_s3ile> on 2021-05-28T17:25:58.851532+00:00 [2021-05-28 10:26:04,324] {standard_task_runner.py:52} INFO - Started process 118 to run task [2021-05-28 10:26:04,331] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'tmp', 'download_file_from_s3ile', '2021-05-28T17:25:58.851532+00:00', '--job-id', '6', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/tmp_dag.py', '--cfg-path', '/tmp/tmpuz8u2gva', '--error-file', '/tmp/tmpms02c24z'] [2021-05-28 10:26:04,333] {standard_task_runner.py:77} INFO - Job 6: Subtask download_file_from_s3ile [2021-05-28 10:26:04,334] {cli_action_loggers.py:66} DEBUG - Calling callbacks: [<function default_action_log at 0x7f348514f0e0>] [2021-05-28 10:26:04,350] {settings.py:210} DEBUG - Setting up DB connection pool (PID 118) [2021-05-28 10:26:04,351] {settings.py:243} DEBUG - settings.prepare_engine_args(): Using NullPool [2021-05-28 10:26:04,357] {taskinstance.py:595} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [None]> from DB [2021-05-28 10:26:04,377] {taskinstance.py:630} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]> [2021-05-28 10:26:04,391] {logging_mixin.py:104} INFO - Running <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]> on host ba1b92003f54 [2021-05-28 10:26:04,395] {taskinstance.py:595} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]> 
from DB [2021-05-28 10:26:04,401] {taskinstance.py:630} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]> [2021-05-28 10:26:04,406] {taskinstance.py:658} DEBUG - Clearing XCom data [2021-05-28 10:26:04,413] {taskinstance.py:665} DEBUG - XCom data cleared [2021-05-28 10:26:04,438] {taskinstance.py:1283} INFO - Exporting the following env vars: AIRFLOW_CTX_DAG_OWNER=airflow AIRFLOW_CTX_DAG_ID=tmp AIRFLOW_CTX_TASK_ID=download_file_from_s3ile AIRFLOW_CTX_EXECUTION_DATE=2021-05-28T17:25:58.851532+00:00 AIRFLOW_CTX_DAG_RUN_ID=manual__2021-05-28T17:25:58.851532+00:00 [2021-05-28 10:26:04,438] {__init__.py:146} DEBUG - Preparing lineage inlets and outlets [2021-05-28 10:26:04,438] {__init__.py:190} DEBUG - inlets: [], outlets: [] [2021-05-28 10:26:04,439] {base_aws.py:362} INFO - Airflow Connection: aws_conn_id=aws_default [2021-05-28 10:26:04,446] {base_aws.py:385} WARNING - Unable to use Airflow Connection for credentials. [2021-05-28 10:26:04,446] {base_aws.py:386} INFO - Fallback on boto3 credential strategy [2021-05-28 10:26:04,446] {base_aws.py:391} INFO - Creating session using boto3 credential strategy region_name=None [2021-05-28 10:26:04,448] {hooks.py:417} DEBUG - Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane [2021-05-28 10:26:04,450] {hooks.py:417} DEBUG - Changing event name from before-call.apigateway to before-call.api-gateway [2021-05-28 10:26:04,451] {hooks.py:417} DEBUG - Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict [2021-05-28 10:26:04,452] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration [2021-05-28 10:26:04,453] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.route53 to before-parameter-build.route-53 [2021-05-28 10:26:04,453] {hooks.py:417} DEBUG - Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search [2021-05-28 10:26:04,454] {hooks.py:417} DEBUG - Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section [2021-05-28 10:26:04,457] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask [2021-05-28 10:26:04,457] {hooks.py:417} DEBUG - Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section [2021-05-28 10:26:04,457] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search [2021-05-28 10:26:04,457] {hooks.py:417} DEBUG - Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section [2021-05-28 10:26:04,471] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/boto3/data/s3/2006-03-01/resources-1.json [2021-05-28 10:26:04,477] {credentials.py:1961} DEBUG - Looking for credentials via: env [2021-05-28 10:26:04,477] {credentials.py:1087} INFO - Found credentials in environment variables. 
[2021-05-28 10:26:04,477] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/endpoints.json [2021-05-28 10:26:04,483] {hooks.py:210} DEBUG - Event choose-service-name: calling handler <function handle_service_name_alias at 0x7f347f1165f0> [2021-05-28 10:26:04,494] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/s3/2006-03-01/service-2.json [2021-05-28 10:26:04,505] {hooks.py:210} DEBUG - Event creating-client-class.s3: calling handler <function add_generate_presigned_post at 0x7f347f1bd170> [2021-05-28 10:26:04,505] {hooks.py:210} DEBUG - Event creating-client-class.s3: calling handler <function lazy_call.<locals>._handler at 0x7f3453f7f170> [2021-05-28 10:26:04,506] {hooks.py:210} DEBUG - Event creating-client-class.s3: calling handler <function add_generate_presigned_url at 0x7f347f1b9ef0> [2021-05-28 10:26:04,510] {endpoint.py:291} DEBUG - Setting s3 timeout as (60, 60) [2021-05-28 10:26:04,511] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/_retry.json [2021-05-28 10:26:04,512] {client.py:164} DEBUG - Registering retry handlers for service: s3 [2021-05-28 10:26:04,513] {factory.py:66} DEBUG - Loading s3:s3 [2021-05-28 10:26:04,515] {factory.py:66} DEBUG - Loading s3:Bucket [2021-05-28 10:26:04,515] {model.py:358} DEBUG - Renaming Bucket attribute name [2021-05-28 10:26:04,516] {hooks.py:210} DEBUG - Event creating-resource-class.s3.Bucket: calling handler <function lazy_call.<locals>._handler at 0x7f3453ecbe60> [2021-05-28 10:26:04,517] {factory.py:66} DEBUG - Loading s3:Object [2021-05-28 10:26:04,519] {hooks.py:210} DEBUG - Event creating-resource-class.s3.Object: calling handler <function lazy_call.<locals>._handler at 0x7f3453ecb3b0> [2021-05-28 10:26:04,520] {utils.py:599} DEBUG - Acquiring 0 [2021-05-28 10:26:04,521] {tasks.py:194} DEBUG - DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f34531fcc50>}) about to wait for the following futures [] [2021-05-28 10:26:04,521] {tasks.py:203} DEBUG - DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f34531fcc50>}) done waiting for dependent futures [2021-05-28 10:26:04,521] {tasks.py:147} DEBUG - Executing task DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f34531fcc50>}) with kwargs {'client': <botocore.client.S3 object at 0x7f3453215d10>, 'config': <boto3.s3.transfer.TransferConfig object at 0x7f3453181390>, 'osutil': <s3transfer.utils.OSUtils object at 0x7f3453181510>, 'request_executor': <s3transfer.futures.BoundedExecutor object at 0x7f3453181190>, 'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f34531fcc50>, 'io_executor': <s3transfer.futures.BoundedExecutor object at 0x7f34531fced0>} [2021-05-28 10:26:04,522] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function sse_md5 at 0x7f347f133a70> [2021-05-28 10:26:04,523] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function validate_bucket_name at 0x7f347f1339e0> [2021-05-28 10:26:04,523] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_cache of <botocore.utils.S3RegionRedirector object at 0x7f345321aa90>> [2021-05-28 10:26:04,523] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: 
calling handler <bound method S3ArnParamHandler.handle_arn of <botocore.utils.S3ArnParamHandler object at 0x7f34531d0250>> [2021-05-28 10:26:04,523] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function generate_idempotent_uuid at 0x7f347f133830> [2021-05-28 10:26:04,524] {hooks.py:210} DEBUG - Event before-call.s3.HeadObject: calling handler <function add_expect_header at 0x7f347f133d40> [2021-05-28 10:26:04,524] {hooks.py:210} DEBUG - Event before-call.s3.HeadObject: calling handler <bound method S3RegionRedirector.set_request_url of <botocore.utils.S3RegionRedirector object at 0x7f345321aa90>> [2021-05-28 10:26:04,525] {hooks.py:210} DEBUG - Event before-call.s3.HeadObject: calling handler <function inject_api_version_header_if_needed at 0x7f347f13b0e0> [2021-05-28 10:26:04,525] {endpoint.py:101} DEBUG - Making request for OperationModel(name=HeadObject) with params: {'url_path': '[REDACT]', 'query_string': {}, 'method': 'HEAD', 'headers': {'User-Agent': 'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource'}, 'body': b'', 'url': 'https://s3.amazonaws.com/[REDACT]/tmp.txt', 'context': {'client_region': 'us-east-1', 'client_config': <botocore.config.Config object at 0x7f345321a710>, 'has_streaming_input': False, 'auth_type': None, 'signing': {'bucket': '[REDACT]'}}} [2021-05-28 10:26:04,526] {hooks.py:210} DEBUG - Event request-created.s3.HeadObject: calling handler <function signal_not_transferring at 0x7f347ee7de60> [2021-05-28 10:26:04,526] {hooks.py:210} DEBUG - Event request-created.s3.HeadObject: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f3453215e90>> [2021-05-28 10:26:04,527] {hooks.py:210} DEBUG - Event choose-signer.s3.HeadObject: calling handler <bound method ClientCreator._default_s3_presign_to_sigv2 of <botocore.client.ClientCreator object at 0x7f3453f046d0>> [2021-05-28 10:26:04,527] {hooks.py:210} DEBUG - Event choose-signer.s3.HeadObject: calling handler <function set_operation_specific_signer at 0x7f347f133710> [2021-05-28 10:26:04,527] {hooks.py:210} DEBUG - Event before-sign.s3.HeadObject: calling handler <bound method S3EndpointSetter.set_endpoint of <botocore.utils.S3EndpointSetter object at 0x7f34531d0710>> [2021-05-28 10:26:04,527] {utils.py:1639} DEBUG - Defaulting to S3 virtual host style addressing with path style addressing fallback. [2021-05-28 10:26:04,528] {utils.py:1018} DEBUG - Checking for DNS compatible bucket for: https://s3.amazonaws.com/[REDACT]/tmp.txt [2021-05-28 10:26:04,528] {utils.py:1036} DEBUG - URI updated to: https://[REDACT].s3.amazonaws.com/tmp.txt [2021-05-28 10:26:04,528] {auth.py:364} DEBUG - Calculating signature using v4 auth. 
[2021-05-28 10:26:04,529] {auth.py:365} DEBUG - CanonicalRequest: HEAD /tmp.txt [REDACT] [2021-05-28 10:26:04,529] {hooks.py:210} DEBUG - Event request-created.s3.HeadObject: calling handler <function signal_transferring at 0x7f347ee8a320> [2021-05-28 10:26:04,529] {endpoint.py:187} DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=HEAD, url=https://[REDACT].s3.amazonaws.com/tmp.txt, headers={'User-Agent': b'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource', 'X-Amz-Date': b'20210528T172604Z', 'X-Amz-Content-SHA256': b'[REDACT]', 'Authorization': b'[REDACT]', SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=[REDACT]'}> [2021-05-28 10:26:04,531] {connectionpool.py:943} DEBUG - Starting new HTTPS connection (1): [REDACT].s3.amazonaws.com:443 [2021-05-28 10:26:05,231] {connectionpool.py:442} DEBUG - https://[REDACT].s3.amazonaws.com:443 "HEAD /tmp.txt HTTP/1.1" 200 0 [2021-05-28 10:26:05,232] {parsers.py:233} DEBUG - Response headers: {'x-amz-id-2': 'o[REDACT]', 'x-amz-request-id': '[REDACT]', 'Date': 'Fri, 28 May 2021 17:26:06 GMT', 'Last-Modified': 'Thu, 27 May 2021 20:37:55 GMT', 'ETag': '"[REDACT]"', 'x-amz-server-side-encryption': 'AES256', 'x-amz-version-id': '[REDACT]', 'Accept-Ranges': 'bytes', 'Content-Type': 'text/plain', 'Content-Length': '5', 'Server': 'AmazonS3'} [2021-05-28 10:26:05,232] {parsers.py:234} DEBUG - Response body: b'' [2021-05-28 10:26:05,234] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f345321ab50> [2021-05-28 10:26:05,235] {retryhandler.py:187} DEBUG - No retry needed. [2021-05-28 10:26:05,235] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f345321aa90>> [2021-05-28 10:26:05,236] {futures.py:318} DEBUG - Submitting task ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) to executor <s3transfer.futures.BoundedExecutor object at 0x7f3453181190> for transfer request: 0. 
[2021-05-28 10:26:05,236] {utils.py:599} DEBUG - Acquiring 0 [2021-05-28 10:26:05,236] {tasks.py:194} DEBUG - ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) about to wait for the following futures [] [2021-05-28 10:26:05,237] {tasks.py:203} DEBUG - ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) done waiting for dependent futures [2021-05-28 10:26:05,237] {tasks.py:147} DEBUG - Executing task ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) with kwargs {'client': <botocore.client.S3 object at 0x7f3453215d10>, 'bucket': '[REDACT]', 'key': 'tmp.txt', 'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7f34531fc890>, 'extra_args': {}, 'callbacks': [], 'max_attempts': 5, 'download_output_manager': <s3transfer.download.DownloadFilenameOutputManager object at 0x7f34531fc7d0>, 'io_chunksize': 262144, 'bandwidth_limiter': None} [2021-05-28 10:26:05,238] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function sse_md5 at 0x7f347f133a70> [2021-05-28 10:26:05,238] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function validate_bucket_name at 0x7f347f1339e0> [2021-05-28 10:26:05,238] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <bound method S3RegionRedirector.redirect_from_cache of <botocore.utils.S3RegionRedirector object at 0x7f345321aa90>> [2021-05-28 10:26:05,238] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <bound method S3ArnParamHandler.handle_arn of <botocore.utils.S3ArnParamHandler object at 0x7f34531d0250>> [2021-05-28 10:26:05,238] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function generate_idempotent_uuid at 0x7f347f133830> [2021-05-28 10:26:05,239] {hooks.py:210} DEBUG - Event before-call.s3.GetObject: calling handler <function add_expect_header at 0x7f347f133d40> [2021-05-28 10:26:05,239] {hooks.py:210} DEBUG - Event before-call.s3.GetObject: calling handler <bound method S3RegionRedirector.set_request_url of <botocore.utils.S3RegionRedirector object at 0x7f345321aa90>> [2021-05-28 10:26:05,239] {hooks.py:210} DEBUG - Event before-call.s3.GetObject: calling handler <function inject_api_version_header_if_needed at 0x7f347f13b0e0> [2021-05-28 10:26:05,240] {utils.py:612} DEBUG - Releasing acquire 0/None [2021-05-28 10:26:05,240] {endpoint.py:101} DEBUG - Making request for OperationModel(name=GetObject) with params: {'url_path': '/[REDACT]/tmp.txt', 'query_string': {}, 'method': 'GET', 'headers': {'User-Agent': 'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource'}, 'body': b'', 'url': '[REDACT]', 'context': {'client_region': 'us-east-1', 'client_config': <botocore.config.Config object at 0x7f345321a710>, 'has_streaming_input': False, 'auth_type': None, 'signing': {'bucket': '[REDACT]'}}} [2021-05-28 10:26:05,241] {hooks.py:210} DEBUG - Event request-created.s3.GetObject: calling handler <function signal_not_transferring at 0x7f347ee7de60> [2021-05-28 10:26:05,241] {hooks.py:210} DEBUG - Event request-created.s3.GetObject: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f3453215e90>> [2021-05-28 10:26:05,241] {hooks.py:210} DEBUG - Event choose-signer.s3.GetObject: calling handler <bound method ClientCreator._default_s3_presign_to_sigv2 of 
<botocore.client.ClientCreator object at 0x7f3453f046d0>> [2021-05-28 10:26:05,242] {hooks.py:210} DEBUG - Event choose-signer.s3.GetObject: calling handler <function set_operation_specific_signer at 0x7f347f133710> [2021-05-28 10:26:05,242] {hooks.py:210} DEBUG - Event before-sign.s3.GetObject: calling handler <bound method S3EndpointSetter.set_endpoint of <botocore.utils.S3EndpointSetter object at 0x7f34531d0710>> [2021-05-28 10:26:05,242] {utils.py:1018} DEBUG - Checking for DNS compatible bucket for: [REDACT] [2021-05-28 10:26:05,242] {utils.py:1036} DEBUG - URI updated to: [REDACT] [2021-05-28 10:26:05,243] {auth.py:364} DEBUG - Calculating signature using v4 auth. [2021-05-28 10:26:05,243] {auth.py:365} DEBUG - CanonicalRequest: GET /tmp.txt [REDACT] [2021-05-28 10:26:05,243] {hooks.py:210} DEBUG - Event request-created.s3.GetObject: calling handler <function signal_transferring at 0x7f347ee8a320> [2021-05-28 10:26:05,243] {endpoint.py:187} DEBUG - Sending http request: <AWSPreparedRequest stream_output=True, method=GET, url=[REDACT], headers={'User-Agent': b'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource', 'X-Amz-Date': b'20210528T172605Z', 'X-Amz-Content-SHA256': b'[REDACT]', 'Authorization': b'[REDACT], SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=[REDACT]'}> [2021-05-28 10:26:05,402] {connectionpool.py:442} DEBUG - https://[REDACT].s3.amazonaws.com:443 "GET /tmp.txt HTTP/1.1" 200 5 [2021-05-28 10:26:05,402] {parsers.py:233} DEBUG - Response headers: {'x-amz-id-2': '[REDACT]', 'x-amz-request-id': '[REDACT]', 'Date': 'Fri, 28 May 2021 17:26:06 GMT', 'Last-Modified': 'Thu, 27 May 2021 20:37:55 GMT', 'ETag': '"[REDACT]"', 'x-amz-server-side-encryption': 'AES256', 'x-amz-version-id': '[REDACT]', 'Accept-Ranges': 'bytes', 'Content-Type': 'text/plain', 'Content-Length': '5', 'Server': 'AmazonS3'} [2021-05-28 10:26:05,403] {parsers.py:234} DEBUG - Response body: <botocore.response.StreamingBody object at 0x7f345310d090> [2021-05-28 10:26:05,404] {hooks.py:210} DEBUG - Event needs-retry.s3.GetObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f345321ab50> [2021-05-28 10:26:05,404] {retryhandler.py:187} DEBUG - No retry needed. 
[2021-05-28 10:26:05,404] {hooks.py:210} DEBUG - Event needs-retry.s3.GetObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f345321aa90>> [2021-05-28 10:26:05,405] {tasks.py:194} DEBUG - IOWriteTask(transfer_id=0, {'offset': 0}) about to wait for the following futures [] [2021-05-28 10:26:05,406] {tasks.py:203} DEBUG - IOWriteTask(transfer_id=0, {'offset': 0}) done waiting for dependent futures [2021-05-28 10:26:05,406] {tasks.py:147} DEBUG - Executing task IOWriteTask(transfer_id=0, {'offset': 0}) with kwargs {'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7f34531fc890>, 'offset': 0} [2021-05-28 10:26:05,407] {tasks.py:194} DEBUG - IORenameFileTask(transfer_id=0, {'final_filename': '/tmp/s3_hook.txt'}) about to wait for the following futures [] [2021-05-28 10:26:05,407] {tasks.py:203} DEBUG - IORenameFileTask(transfer_id=0, {'final_filename': '/tmp/s3_hook.txt'}) done waiting for dependent futures [2021-05-28 10:26:05,408] {tasks.py:147} DEBUG - Executing task IORenameFileTask(transfer_id=0, {'final_filename': '/tmp/s3_hook.txt'}) with kwargs {'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7f34531fc890>, 'final_filename': '/tmp/s3_hook.txt', 'osutil': <s3transfer.utils.OSUtils object at 0x7f3453181510>} [2021-05-28 10:26:05,409] {utils.py:612} DEBUG - Releasing acquire 0/None [2021-05-28 10:26:05,412] {tmp_dag.py:21} INFO - File downloaded: /tmp/s3_hook.txt [2021-05-28 10:26:05,413] {tmp_dag.py:24} INFO - FILE CONTENT test [2021-05-28 10:26:05,413] {python.py:118} INFO - Done. Returned value was: None [2021-05-28 10:26:05,413] {__init__.py:107} DEBUG - Lineage called with inlets: [], outlets: [] [2021-05-28 10:26:05,413] {taskinstance.py:595} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]> from DB [2021-05-28 10:26:05,421] {taskinstance.py:630} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]> [2021-05-28 10:26:05,423] {taskinstance.py:1192} INFO - Marking task as SUCCESS. 
dag_id=tmp, task_id=download_file_from_s3ile, execution_date=20210528T172558, start_date=20210528T172604, end_date=20210528T172605 [2021-05-28 10:26:05,423] {taskinstance.py:1891} DEBUG - Task Duration set to 1.141694 [2021-05-28 10:26:05,455] {dagrun.py:491} DEBUG - number of tis tasks for <DagRun tmp @ 2021-05-28 17:25:58.851532+00:00: manual__2021-05-28T17:25:58.851532+00:00, externally triggered: True>: 0 task(s) [2021-05-28 10:26:05,456] {taskinstance.py:1246} INFO - 0 downstream tasks scheduled from follow-on schedule check [2021-05-28 10:26:05,457] {cli_action_loggers.py:84} DEBUG - Calling callbacks: [] [2021-05-28 10:26:05,510] {local_task_job.py:146} INFO - Task exited with return code 0 [2021-05-28 10:26:05,511] {taskinstance.py:595} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [running]> from DB [2021-05-28 10:26:05,524] {taskinstance.py:630} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:25:58.851532+00:00 [success]> ``` ⚠️ notice the file content (`test`) is properly shown in the log The logged output from 2.1.0 ```*** Log file does not exist: /usr/local/airflow/logs/tmp/download_file_from_s3ile/2021-05-28T17:36:09.750993+00:00/1.log *** Fetching from: http://f2ffe4375669:8793/log/tmp/download_file_from_s3ile/2021-05-28T17:36:09.750993+00:00/1.log [2021-05-28 10:36:14,758] {executor_loader.py:82} DEBUG - Loading core executor: CeleryExecutor [2021-05-28 10:36:14,769] {__init__.py:51} DEBUG - Loading core task runner: StandardTaskRunner [2021-05-28 10:36:14,779] {base_task_runner.py:62} DEBUG - Planning to run as the user [2021-05-28 10:36:14,781] {taskinstance.py:594} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> from DB [2021-05-28 10:36:14,788] {taskinstance.py:629} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> [2021-05-28 10:36:14,789] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks. [2021-05-28 10:36:14,789] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Task Instance Not Running' PASSED: True, Task is not in running state. [2021-05-28 10:36:14,789] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Task Instance State' PASSED: True, Task state queued was valid. [2021-05-28 10:36:14,793] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying. [2021-05-28 10:36:14,793] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set. 
[2021-05-28 10:36:14,793] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> [2021-05-28 10:36:14,793] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks. [2021-05-28 10:36:14,800] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Pool Slots Available' PASSED: True, ('There are enough open slots in %s to execute the task', 'default_pool') [2021-05-28 10:36:14,808] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying. [2021-05-28 10:36:14,810] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set. [2021-05-28 10:36:14,810] {taskinstance.py:891} DEBUG - <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> dependency 'Task Concurrency' PASSED: True, Task concurrency is not set. [2021-05-28 10:36:14,810] {taskinstance.py:876} INFO - Dependencies all met for <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [queued]> [2021-05-28 10:36:14,810] {taskinstance.py:1067} INFO - -------------------------------------------------------------------------------- [2021-05-28 10:36:14,810] {taskinstance.py:1068} INFO - Starting attempt 1 of 1 [2021-05-28 10:36:14,811] {taskinstance.py:1069} INFO - -------------------------------------------------------------------------------- [2021-05-28 10:36:14,823] {taskinstance.py:1087} INFO - Executing <Task(PythonOperator): download_file_from_s3ile> on 2021-05-28T17:36:09.750993+00:00 [2021-05-28 10:36:14,830] {standard_task_runner.py:52} INFO - Started process 116 to run task [2021-05-28 10:36:14,836] {standard_task_runner.py:76} INFO - Running: ['***', 'tasks', 'run', 'tmp', 'download_file_from_s3ile', '2021-05-28T17:36:09.750993+00:00', '--job-id', '8', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/tmp_dag.py', '--cfg-path', '/tmp/tmplhbjfxop', '--error-file', '/tmp/tmpdbeh5gr9'] [2021-05-28 10:36:14,839] {standard_task_runner.py:77} INFO - Job 8: Subtask download_file_from_s3ile [2021-05-28 10:36:14,841] {cli_action_loggers.py:66} DEBUG - Calling callbacks: [<function default_action_log at 0x7f2e2920f5f0>] [2021-05-28 10:36:14,860] {settings.py:210} DEBUG - Setting up DB connection pool (PID 116) [2021-05-28 10:36:14,860] {settings.py:246} DEBUG - settings.prepare_engine_args(): Using NullPool [2021-05-28 10:36:14,864] {taskinstance.py:594} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [None]> from DB [2021-05-28 10:36:14,883] {taskinstance.py:629} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]> [2021-05-28 10:36:14,893] {logging_mixin.py:104} INFO - Running <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]> on host f2ffe4375669 [2021-05-28 10:36:14,896] {taskinstance.py:594} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]> from 
DB [2021-05-28 10:36:14,902] {taskinstance.py:629} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]> [2021-05-28 10:36:14,917] {taskinstance.py:657} DEBUG - Clearing XCom data [2021-05-28 10:36:14,925] {taskinstance.py:664} DEBUG - XCom data cleared [2021-05-28 10:36:14,947] {taskinstance.py:1282} INFO - Exporting the following env vars: AIRFLOW_CTX_DAG_OWNER=*** AIRFLOW_CTX_DAG_ID=tmp AIRFLOW_CTX_TASK_ID=download_file_from_s3ile AIRFLOW_CTX_EXECUTION_DATE=2021-05-28T17:36:09.750993+00:00 AIRFLOW_CTX_DAG_RUN_ID=manual__2021-05-28T17:36:09.750993+00:00 [2021-05-28 10:36:14,948] {__init__.py:146} DEBUG - Preparing lineage inlets and outlets [2021-05-28 10:36:14,948] {__init__.py:190} DEBUG - inlets: [], outlets: [] [2021-05-28 10:36:14,949] {base_aws.py:362} INFO - Airflow Connection: aws_conn_id=aws_default [2021-05-28 10:36:14,958] {base_aws.py:385} WARNING - Unable to use Airflow Connection for credentials. [2021-05-28 10:36:14,958] {base_aws.py:386} INFO - Fallback on boto3 credential strategy [2021-05-28 10:36:14,958] {base_aws.py:391} INFO - Creating session using boto3 credential strategy region_name=None [2021-05-28 10:36:14,960] {hooks.py:417} DEBUG - Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane [2021-05-28 10:36:14,962] {hooks.py:417} DEBUG - Changing event name from before-call.apigateway to before-call.api-gateway [2021-05-28 10:36:14,962] {hooks.py:417} DEBUG - Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict [2021-05-28 10:36:14,965] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration [2021-05-28 10:36:14,965] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.route53 to before-parameter-build.route-53 [2021-05-28 10:36:14,965] {hooks.py:417} DEBUG - Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search [2021-05-28 10:36:14,966] {hooks.py:417} DEBUG - Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section [2021-05-28 10:36:14,968] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask [2021-05-28 10:36:14,969] {hooks.py:417} DEBUG - Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section [2021-05-28 10:36:14,969] {hooks.py:417} DEBUG - Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search [2021-05-28 10:36:14,969] {hooks.py:417} DEBUG - Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section [2021-05-28 10:36:14,982] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/boto3/data/s3/2006-03-01/resources-1.json [2021-05-28 10:36:14,986] {credentials.py:1961} DEBUG - Looking for credentials via: env [2021-05-28 10:36:14,986] {credentials.py:1087} INFO - Found credentials in environment variables. 
[2021-05-28 10:36:14,987] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/endpoints.json [2021-05-28 10:36:14,992] {hooks.py:210} DEBUG - Event choose-service-name: calling handler <function handle_service_name_alias at 0x7f2e22e7b7a0> [2021-05-28 10:36:15,002] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/s3/2006-03-01/service-2.json [2021-05-28 10:36:15,010] {hooks.py:210} DEBUG - Event creating-client-class.s3: calling handler <function add_generate_presigned_post at 0x7f2e22ea5320> [2021-05-28 10:36:15,010] {hooks.py:210} DEBUG - Event creating-client-class.s3: calling handler <function lazy_call.<locals>._handler at 0x7f2df976ee60> [2021-05-28 10:36:15,011] {hooks.py:210} DEBUG - Event creating-client-class.s3: calling handler <function add_generate_presigned_url at 0x7f2e22ea50e0> [2021-05-28 10:36:15,015] {endpoint.py:291} DEBUG - Setting s3 timeout as (60, 60) [2021-05-28 10:36:15,017] {loaders.py:174} DEBUG - Loading JSON file: /usr/local/lib/python3.7/site-packages/botocore/data/_retry.json [2021-05-28 10:36:15,017] {client.py:164} DEBUG - Registering retry handlers for service: s3 [2021-05-28 10:36:15,018] {factory.py:66} DEBUG - Loading s3:s3 [2021-05-28 10:36:15,019] {factory.py:66} DEBUG - Loading s3:Bucket [2021-05-28 10:36:15,020] {model.py:358} DEBUG - Renaming Bucket attribute name [2021-05-28 10:36:15,021] {hooks.py:210} DEBUG - Event creating-resource-class.s3.Bucket: calling handler <function lazy_call.<locals>._handler at 0x7f2df9762d40> [2021-05-28 10:36:15,021] {factory.py:66} DEBUG - Loading s3:Object [2021-05-28 10:36:15,022] {hooks.py:210} DEBUG - Event creating-resource-class.s3.Object: calling handler <function lazy_call.<locals>._handler at 0x7f2df977fa70> [2021-05-28 10:36:15,023] {utils.py:599} DEBUG - Acquiring 0 [2021-05-28 10:36:15,024] {tasks.py:194} DEBUG - DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f2df921f790>}) about to wait for the following futures [] [2021-05-28 10:36:15,024] {tasks.py:203} DEBUG - DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f2df921f790>}) done waiting for dependent futures [2021-05-28 10:36:15,025] {tasks.py:147} DEBUG - Executing task DownloadSubmissionTask(transfer_id=0, {'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f2df921f790>}) with kwargs {'client': <botocore.client.S3 object at 0x7f2df9721390>, 'config': <boto3.s3.transfer.TransferConfig object at 0x7f2df921fe90>, 'osutil': <s3transfer.utils.OSUtils object at 0x7f2df921ffd0>, 'request_executor': <s3transfer.futures.BoundedExecutor object at 0x7f2df921fcd0>, 'transfer_future': <s3transfer.futures.TransferFuture object at 0x7f2df921f790>, 'io_executor': <s3transfer.futures.BoundedExecutor object at 0x7f2df921fa50>} [2021-05-28 10:36:15,025] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function sse_md5 at 0x7f2e22e18c20> [2021-05-28 10:36:15,025] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function validate_bucket_name at 0x7f2e22e18b90> [2021-05-28 10:36:15,025] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_cache of <botocore.utils.S3RegionRedirector object at 0x7f2df926ed10>> [2021-05-28 10:36:15,026] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: 
calling handler <bound method S3ArnParamHandler.handle_arn of <botocore.utils.S3ArnParamHandler object at 0x7f2df92be3d0>> [2021-05-28 10:36:15,026] {hooks.py:210} DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function generate_idempotent_uuid at 0x7f2e22e189e0> [2021-05-28 10:36:15,027] {hooks.py:210} DEBUG - Event before-call.s3.HeadObject: calling handler <function add_expect_header at 0x7f2e22e18ef0> [2021-05-28 10:36:15,027] {hooks.py:210} DEBUG - Event before-call.s3.HeadObject: calling handler <bound method S3RegionRedirector.set_request_url of <botocore.utils.S3RegionRedirector object at 0x7f2df926ed10>> [2021-05-28 10:36:15,027] {hooks.py:210} DEBUG - Event before-call.s3.HeadObject: calling handler <function inject_api_version_header_if_needed at 0x7f2e22e1f290> [2021-05-28 10:36:15,027] {endpoint.py:101} DEBUG - Making request for OperationModel(name=HeadObject) with params: {'url_path': '/[REDACT]/tmp.txt', 'query_string': {}, 'method': 'HEAD', 'headers': {'User-Agent': 'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource'}, 'body': [], 'url': 'https://s3.amazonaws.com/[REDACT]/tmp.txt', 'context': {'client_region': 'us-east-1', 'client_config': <botocore.config.Config object at 0x7f2df92be350>, 'has_streaming_input': False, 'auth_type': None, 'signing': {'bucket': '[REDACT]'}}} [2021-05-28 10:36:15,028] {hooks.py:210} DEBUG - Event request-created.s3.HeadObject: calling handler <function signal_not_transferring at 0x7f2e22b9f170> [2021-05-28 10:36:15,029] {hooks.py:210} DEBUG - Event request-created.s3.HeadObject: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f2df92b7a10>> [2021-05-28 10:36:15,029] {hooks.py:210} DEBUG - Event choose-signer.s3.HeadObject: calling handler <bound method ClientCreator._default_s3_presign_to_sigv2 of <botocore.client.ClientCreator object at 0x7f2df96db510>> [2021-05-28 10:36:15,029] {hooks.py:210} DEBUG - Event choose-signer.s3.HeadObject: calling handler <function set_operation_specific_signer at 0x7f2e22e188c0> [2021-05-28 10:36:15,029] {hooks.py:210} DEBUG - Event before-sign.s3.HeadObject: calling handler <bound method S3EndpointSetter.set_endpoint of <botocore.utils.S3EndpointSetter object at 0x7f2df92756d0>> [2021-05-28 10:36:15,029] {utils.py:1639} DEBUG - Defaulting to S3 virtual host style addressing with path style addressing fallback. [2021-05-28 10:36:15,029] {utils.py:1018} DEBUG - Checking for DNS compatible bucket for: https://s3.amazonaws.com/[REDACT]/tmp.txt [2021-05-28 10:36:15,030] {utils.py:1036} DEBUG - URI updated to: https://[REDACT].s3.amazonaws.com/tmp.txt [2021-05-28 10:36:15,030] {auth.py:364} DEBUG - Calculating signature using v4 auth. 
[2021-05-28 10:36:15,030] {auth.py:365} DEBUG - CanonicalRequest: HEAD /tmp.txt [REDACT] [2021-05-28 10:36:15,031] {hooks.py:210} DEBUG - Event request-created.s3.HeadObject: calling handler <function signal_transferring at 0x7f2e22baa4d0> [2021-05-28 10:36:15,031] {endpoint.py:187} DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=HEAD, url=https://[REDACT].s3.amazonaws.com/tmp.txt, headers={'User-Agent': b'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource', 'X-Amz-Date': b'20210528T173615Z', 'X-Amz-Content-SHA256': b'[REDACT]', 'Authorization': b'[REDACT], SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=[REDACT]'}> [2021-05-28 10:36:15,032] {connectionpool.py:943} DEBUG - Starting new HTTPS connection (1): [REDACT].s3.amazonaws.com:443 [2021-05-28 10:36:15,695] {connectionpool.py:442} DEBUG - https://[REDACT].s3.amazonaws.com:443 "HEAD /tmp.txt HTTP/1.1" 200 0 [2021-05-28 10:36:15,696] {parsers.py:233} DEBUG - Response headers: ['x-amz-id-2', 'x-amz-request-id', 'Date', 'Last-Modified', 'ETag', 'x-amz-server-side-encryption', 'x-amz-version-id', 'Accept-Ranges', 'Content-Type', 'Content-Length', 'Server'] [2021-05-28 10:36:15,696] {parsers.py:234} DEBUG - Response body: [] [2021-05-28 10:36:15,697] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f2df926ef90> [2021-05-28 10:36:15,698] {retryhandler.py:187} DEBUG - No retry needed. [2021-05-28 10:36:15,698] {hooks.py:210} DEBUG - Event needs-retry.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f2df926ed10>> [2021-05-28 10:36:15,698] {futures.py:318} DEBUG - Submitting task ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) to executor <s3transfer.futures.BoundedExecutor object at 0x7f2df921fcd0> for transfer request: 0. 
[2021-05-28 10:36:15,698] {utils.py:599} DEBUG - Acquiring 0 [2021-05-28 10:36:15,699] {tasks.py:194} DEBUG - ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) about to wait for the following futures [] [2021-05-28 10:36:15,699] {tasks.py:203} DEBUG - ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) done waiting for dependent futures [2021-05-28 10:36:15,699] {tasks.py:147} DEBUG - Executing task ImmediatelyWriteIOGetObjectTask(transfer_id=0, {'bucket': '[REDACT]', 'key': 'tmp.txt', 'extra_args': {}}) with kwargs {'client': <botocore.client.S3 object at 0x7f2df9721390>, 'bucket': '[REDACT]', 'key': 'tmp.txt', 'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7f2df97dc0d0>, 'extra_args': {}, 'callbacks': [], 'max_attempts': 5, 'download_output_manager': <s3transfer.download.DownloadFilenameOutputManager object at 0x7f2df976b310>, 'io_chunksize': 262144, 'bandwidth_limiter': None} [2021-05-28 10:36:15,699] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function sse_md5 at 0x7f2e22e18c20> [2021-05-28 10:36:15,700] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function validate_bucket_name at 0x7f2e22e18b90> [2021-05-28 10:36:15,700] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <bound method S3RegionRedirector.redirect_from_cache of <botocore.utils.S3RegionRedirector object at 0x7f2df926ed10>> [2021-05-28 10:36:15,700] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <bound method S3ArnParamHandler.handle_arn of <botocore.utils.S3ArnParamHandler object at 0x7f2df92be3d0>> [2021-05-28 10:36:15,700] {hooks.py:210} DEBUG - Event before-parameter-build.s3.GetObject: calling handler <function generate_idempotent_uuid at 0x7f2e22e189e0> [2021-05-28 10:36:15,700] {hooks.py:210} DEBUG - Event before-call.s3.GetObject: calling handler <function add_expect_header at 0x7f2e22e18ef0> [2021-05-28 10:36:15,700] {hooks.py:210} DEBUG - Event before-call.s3.GetObject: calling handler <bound method S3RegionRedirector.set_request_url of <botocore.utils.S3RegionRedirector object at 0x7f2df926ed10>> [2021-05-28 10:36:15,701] {hooks.py:210} DEBUG - Event before-call.s3.GetObject: calling handler <function inject_api_version_header_if_needed at 0x7f2e22e1f290> [2021-05-28 10:36:15,701] {endpoint.py:101} DEBUG - Making request for OperationModel(name=GetObject) with params: {'url_path': '/[REDACT]/tmp.txt', 'query_string': {}, 'method': 'GET', 'headers': {'User-Agent': 'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource'}, 'body': [], 'url': 'https://s3.amazonaws.com/[REDACT]/tmp.txt', 'context': {'client_region': 'us-east-1', 'client_config': <botocore.config.Config object at 0x7f2df92be350>, 'has_streaming_input': False, 'auth_type': None, 'signing': {'bucket': '[REDACT]'}}} [2021-05-28 10:36:15,701] {hooks.py:210} DEBUG - Event request-created.s3.GetObject: calling handler <function signal_not_transferring at 0x7f2e22b9f170> [2021-05-28 10:36:15,701] {hooks.py:210} DEBUG - Event request-created.s3.GetObject: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f2df92b7a10>> [2021-05-28 10:36:15,701] {hooks.py:210} DEBUG - Event choose-signer.s3.GetObject: calling handler <bound method ClientCreator._default_s3_presign_to_sigv2 of <botocore.client.ClientCreator object at 
0x7f2df96db510>> [2021-05-28 10:36:15,702] {hooks.py:210} DEBUG - Event choose-signer.s3.GetObject: calling handler <function set_operation_specific_signer at 0x7f2e22e188c0> [2021-05-28 10:36:15,702] {hooks.py:210} DEBUG - Event before-sign.s3.GetObject: calling handler <bound method S3EndpointSetter.set_endpoint of <botocore.utils.S3EndpointSetter object at 0x7f2df92756d0>> [2021-05-28 10:36:15,702] {utils.py:1018} DEBUG - Checking for DNS compatible bucket for: https://s3.amazonaws.com/[REDACT]/tmp.txt [2021-05-28 10:36:15,702] {utils.py:1036} DEBUG - URI updated to: https://[REDACT].s3.amazonaws.com/tmp.txt [2021-05-28 10:36:15,702] {utils.py:612} DEBUG - Releasing acquire 0/None [2021-05-28 10:36:15,702] {auth.py:364} DEBUG - Calculating signature using v4 auth. [2021-05-28 10:36:15,703] {auth.py:365} DEBUG - CanonicalRequest: GET /tmp.txt [REDACT] [2021-05-28 10:36:15,703] {hooks.py:210} DEBUG - Event request-created.s3.GetObject: calling handler <function signal_transferring at 0x7f2e22baa4d0> [2021-05-28 10:36:15,703] {endpoint.py:187} DEBUG - Sending http request: <AWSPreparedRequest stream_output=True, method=GET, url=https://[REDACT].s3.amazonaws.com/tmp.txt, headers={'User-Agent': b'Boto3/1.15.18 Python/3.7.10 Linux/5.10.25-linuxkit Botocore/1.18.18 Resource', 'X-Amz-Date': b'20210528T173615Z', 'X-Amz-Content-SHA256': b'[REDACT]', 'Authorization': b'[REDACT], SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=[REDACT]'}> [2021-05-28 10:36:15,879] {connectionpool.py:442} DEBUG - https://[REDACT].s3.amazonaws.com:443 "GET /tmp.txt HTTP/1.1" 200 5 [2021-05-28 10:36:15,879] {parsers.py:233} DEBUG - Response headers: ['x-amz-id-2', 'x-amz-request-id', 'Date', 'Last-Modified', 'ETag', 'x-amz-server-side-encryption', 'x-amz-version-id', 'Accept-Ranges', 'Content-Type', 'Content-Length', 'Server'] [2021-05-28 10:36:15,879] {parsers.py:234} DEBUG - Response body: [[116, 101, 115, 116, 10]] [2021-05-28 10:36:15,883] {hooks.py:210} DEBUG - Event needs-retry.s3.GetObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f2df926ef90> [2021-05-28 10:36:15,883] {retryhandler.py:187} DEBUG - No retry needed. 
[2021-05-28 10:36:15,883] {hooks.py:210} DEBUG - Event needs-retry.s3.GetObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f2df926ed10>> [2021-05-28 10:36:15,883] {tasks.py:194} DEBUG - IOWriteTask(transfer_id=0, {'offset': 0}) about to wait for the following futures [] [2021-05-28 10:36:15,885] {tasks.py:203} DEBUG - IOWriteTask(transfer_id=0, {'offset': 0}) done waiting for dependent futures [2021-05-28 10:36:15,885] {tasks.py:147} DEBUG - Executing task IOWriteTask(transfer_id=0, {'offset': 0}) with kwargs {'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7f2df97dc0d0>, 'offset': 0} [2021-05-28 10:36:15,885] {tasks.py:194} DEBUG - IORenameFileTask(transfer_id=0, {'final_filename': '/tmp/s3_hook.txt'}) about to wait for the following futures [] [2021-05-28 10:36:15,886] {tasks.py:203} DEBUG - IORenameFileTask(transfer_id=0, {'final_filename': '/tmp/s3_hook.txt'}) done waiting for dependent futures [2021-05-28 10:36:15,886] {tasks.py:147} DEBUG - Executing task IORenameFileTask(transfer_id=0, {'final_filename': '/tmp/s3_hook.txt'}) with kwargs {'fileobj': <s3transfer.utils.DeferredOpenFile object at 0x7f2df97dc0d0>, 'final_filename': '/tmp/s3_hook.txt', 'osutil': <s3transfer.utils.OSUtils object at 0x7f2df921ffd0>} [2021-05-28 10:36:15,886] {utils.py:612} DEBUG - Releasing acquire 0/None [2021-05-28 10:36:15,887] {tmp_dag.py:21} INFO - File downloaded: /tmp/s3_hook.txt [2021-05-28 10:36:15,888] {tmp_dag.py:24} INFO - FILE CONTENT [2021-05-28 10:36:15,888] {python.py:151} INFO - Done. Returned value was: None [2021-05-28 10:36:15,888] {__init__.py:107} DEBUG - Lineage called with inlets: [], outlets: [] [2021-05-28 10:36:15,888] {taskinstance.py:594} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]> from DB [2021-05-28 10:36:15,893] {taskinstance.py:629} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]> [2021-05-28 10:36:15,894] {taskinstance.py:1191} INFO - Marking task as SUCCESS. 
dag_id=tmp, task_id=download_file_from_s3ile, execution_date=20210528T173609, start_date=20210528T173614, end_date=20210528T173615 [2021-05-28 10:36:15,894] {taskinstance.py:1888} DEBUG - Task Duration set to 1.100586 [2021-05-28 10:36:15,915] {dagrun.py:490} DEBUG - number of tis tasks for <DagRun tmp @ 2021-05-28 17:36:09.750993+00:00: manual__2021-05-28T17:36:09.750993+00:00, externally triggered: True>: 0 task(s) [2021-05-28 10:36:15,917] {taskinstance.py:1245} INFO - 0 downstream tasks scheduled from follow-on schedule check [2021-05-28 10:36:15,917] {cli_action_loggers.py:84} DEBUG - Calling callbacks: [] [2021-05-28 10:36:15,939] {local_task_job.py:151} INFO - Task exited with return code 0 [2021-05-28 10:36:15,939] {taskinstance.py:594} DEBUG - Refreshing TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [running]> from DB [2021-05-28 10:36:15,949] {taskinstance.py:629} DEBUG - Refreshed TaskInstance <TaskInstance: tmp.download_file_from_s3ile 2021-05-28T17:36:09.750993+00:00 [success]> ``` ⚠️ notice the file content is **NOT** properly shown in the log **Anything else we need to know**: pip freeze for 2.0.2: ``` adal==1.2.7 aiohttp==3.7.4.post0 alembic==1.6.5 amqp==2.6.1 ansiwrap==0.8.4 apache-airflow==2.0.2 apache-airflow-providers-amazon==1.2.0 apache-airflow-providers-celery==1.0.1 apache-airflow-providers-databricks==1.0.1 apache-airflow-providers-ftp==1.1.0 apache-airflow-providers-google==1.0.0 apache-airflow-providers-http==1.1.1 apache-airflow-providers-imap==1.0.1 apache-airflow-providers-jdbc==1.0.1 apache-airflow-providers-mongo==1.0.1 apache-airflow-providers-mysql==1.0.2 apache-airflow-providers-papermill==1.0.2 apache-airflow-providers-postgres==1.0.1 apache-airflow-providers-redis==1.0.1 apache-airflow-providers-salesforce==1.0.1 apache-airflow-providers-slack==3.0.0 apache-airflow-providers-snowflake==1.1.1 apache-airflow-providers-sqlite==1.0.2 apache-airflow-providers-ssh==1.2.0 apispec==3.3.2 appdirs==1.4.4 argcomplete==1.12.3 asn1crypto==1.4.0 async-generator==1.10 async-timeout==3.0.1 attrs==20.3.0 Authlib==0.15.3 avro-python3==1.10.0 azure-common==1.1.27 azure-core==1.14.0 azure-datalake-store==0.0.52 azure-storage-blob==12.8.1 Babel==2.9.1 backcall==0.2.0 bcrypt==3.2.0 billiard==3.6.4.0 black==21.5b1 blinker==1.4 boto3==1.15.18 botocore==1.18.18 cached-property==1.5.2 cachetools==4.2.2 cattrs==1.7.0 celery==4.4.7 Cerberus==1.3.2 certifi==2020.12.5 cffi==1.14.5 chardet==3.0.4 click==7.1.2 clickclick==20.10.2 colorama==0.4.4 colorlog==5.0.1 commonmark==0.9.1 connexion==2.7.0 croniter==0.3.37 cryptography==3.4.7 cycler==0.10.0 databricks-cli==0.14.3 databricks-connect==7.3.8 decorator==5.0.9 defusedxml==0.7.1 dill==0.3.3 dnspython==1.16.0 docutils==0.17.1 email-validator==1.1.2 entrypoints==0.3 Flask==1.1.4 Flask-AppBuilder==3.3.0 Flask-Babel==1.0.0 Flask-Bcrypt==0.7.1 Flask-Caching==1.10.1 Flask-JWT-Extended==3.25.1 Flask-Login==0.4.1 Flask-OpenID==1.2.5 Flask-SQLAlchemy==2.5.1 Flask-WTF==0.14.3 flower==0.9.5 fsspec==2021.5.0 gcsfs==2021.5.0 google-ads==7.0.0 google-api-core==1.26.0 google-api-python-client==1.12.8 google-auth==1.27.0 google-auth-httplib2==0.1.0 google-auth-oauthlib==0.4.4 google-cloud-automl==1.0.1 google-cloud-bigquery==2.17.0 google-cloud-bigquery-datatransfer==1.1.1 google-cloud-bigquery-storage==2.4.0 google-cloud-bigtable==1.7.0 google-cloud-container==1.0.1 google-cloud-core==1.6.0 google-cloud-datacatalog==0.7.0 google-cloud-dataproc==1.1.1 google-cloud-dlp==1.0.0 google-cloud-kms==1.4.0 
google-cloud-language==1.3.0 google-cloud-logging==1.15.1 google-cloud-memcache==0.3.0 google-cloud-monitoring==1.1.0 google-cloud-os-login==1.0.0 google-cloud-pubsub==1.7.0 google-cloud-redis==1.0.0 google-cloud-secret-manager==1.0.0 google-cloud-spanner==1.19.1 google-cloud-speech==1.3.2 google-cloud-storage==1.38.0 google-cloud-tasks==1.5.0 google-cloud-texttospeech==1.0.1 google-cloud-translate==1.7.0 google-cloud-videointelligence==1.16.1 google-cloud-vision==1.0.0 google-crc32c==1.1.2 google-resumable-media==1.3.0 googleapis-common-protos==1.53.0 graphviz==0.16 grpc-google-iam-v1==0.12.3 grpcio==1.38.0 grpcio-gcp==0.2.2 gunicorn==19.10.0 httplib2==0.19.1 humanize==3.5.0 idna==2.10 importlib-metadata==1.7.0 importlib-resources==1.5.0 inflection==0.5.1 iniconfig==1.1.1 ipykernel==5.4.3 ipython==7.23.1 ipython-genutils==0.2.0 iso8601==0.1.14 isodate==0.6.0 itsdangerous==1.1.0 JayDeBeApi==1.2.3 jedi==0.18.0 Jinja2==2.11.3 jmespath==0.10.0 joblib==1.0.1 JPype1==1.2.1 jsonschema==3.2.0 jupyter-client==6.1.12 jupyter-core==4.7.1 kiwisolver==1.3.1 kombu==4.6.11 lazy-object-proxy==1.6.0 libcst==0.3.19 lockfile==0.12.2 Mako==1.1.4 Markdown==3.3.4 MarkupSafe==1.1.1 marshmallow==3.12.1 marshmallow-enum==1.5.1 marshmallow-oneofschema==2.1.0 marshmallow-sqlalchemy==0.23.1 matplotlib==3.3.4 matplotlib-inline==0.1.2 msrest==0.6.21 multidict==5.1.0 mypy-extensions==0.4.3 mysql-connector-python==8.0.22 mysqlclient==1.3.14 natsort==7.1.1 nbclient==0.5.3 nbformat==5.1.3 nest-asyncio==1.5.1 nteract-scrapbook==0.4.2 numpy==1.20.3 oauthlib==3.1.0 openapi-schema-validator==0.1.5 openapi-spec-validator==0.3.1 oscrypto==1.2.1 packaging==20.9 pandas==1.2.4 pandas-gbq==0.15.0 papermill==2.3.3 paramiko==2.7.2 parso==0.8.2 pathspec==0.8.1 pendulum==2.1.2 pexpect==4.8.0 pickleshare==0.7.5 Pillow==8.2.0 prison==0.1.3 prometheus-client==0.8.0 prompt-toolkit==3.0.18 proto-plus==1.18.1 protobuf==3.17.1 psutil==5.8.0 psycopg2-binary==2.8.6 ptyprocess==0.7.0 py4j==0.10.9 pyarrow==4.0.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycparser==2.20 pycryptodomex==3.10.1 pydata-google-auth==1.2.0 Pygments==2.9.0 PyJWT==1.7.1 pymongo==3.11.4 PyNaCl==1.4.0 pyOpenSSL==20.0.1 pyparsing==2.4.7 pyrsistent==0.17.3 pysftp==0.2.9 python-daemon==2.3.0 python-dateutil==2.8.1 python-editor==1.0.4 python-nvd3==0.15.0 python-slugify==4.0.1 python3-openid==3.2.0 pytz==2021.1 pytzdata==2020.1 PyYAML==5.4.1 pyzmq==22.1.0 redis==3.5.3 regex==2021.4.4 requests==2.25.1 requests-oauthlib==1.3.0 rich==9.2.0 rsa==4.7.2 s3transfer==0.3.7 scikit-learn==0.24.1 scipy==1.6.3 setproctitle==1.2.2 simple-salesforce==1.11.1 six==1.16.0 slack-sdk==3.5.1 snowflake-connector-python==2.4.3 snowflake-sqlalchemy==1.2.4 SQLAlchemy==1.3.23 SQLAlchemy-JSONField==1.0.0 SQLAlchemy-Utils==0.37.4 sqlparse==0.4.1 sshtunnel==0.1.5 swagger-ui-bundle==0.0.8 tableauserverclient==0.15.0 tabulate==0.8.9 tenacity==6.2.0 termcolor==1.1.0 text-unidecode==1.3 textwrap3==0.9.2 threadpoolctl==2.1.0 toml==0.10.2 tornado==6.1 tqdm==4.61.0 traitlets==5.0.5 typed-ast==1.4.3 typing-extensions==3.10.0.0 typing-inspect==0.6.0 unicodecsv==0.14.1 uritemplate==3.0.1 urllib3==1.25.11 vine==1.3.0 watchtower==0.7.3 wcwidth==0.2.5 Werkzeug==1.0.1 WTForms==2.3.3 yarl==1.6.3 zipp==3.4.1 ``` pip freeze for 2.1.0: ``` adal==1.2.7 aiohttp==3.7.4.post0 alembic==1.6.5 amqp==2.6.1 ansiwrap==0.8.4 apache-airflow==2.1.0 apache-airflow-providers-amazon==1.2.0 apache-airflow-providers-celery==1.0.1 apache-airflow-providers-databricks==1.0.1 apache-airflow-providers-ftp==1.1.0 apache-airflow-providers-google==1.0.0 
apache-airflow-providers-http==1.1.1 apache-airflow-providers-imap==1.0.1 apache-airflow-providers-jdbc==1.0.1 apache-airflow-providers-mongo==1.0.1 apache-airflow-providers-mysql==1.0.2 apache-airflow-providers-papermill==1.0.2 apache-airflow-providers-postgres==1.0.1 apache-airflow-providers-redis==1.0.1 apache-airflow-providers-salesforce==1.0.1 apache-airflow-providers-slack==3.0.0 apache-airflow-providers-snowflake==1.1.1 apache-airflow-providers-sqlite==1.0.2 apache-airflow-providers-ssh==1.2.0 apispec==3.3.2 appdirs==1.4.4 argcomplete==1.12.3 asn1crypto==1.4.0 async-generator==1.10 async-timeout==3.0.1 attrs==20.3.0 Authlib==0.15.3 avro-python3==1.10.0 azure-common==1.1.27 azure-core==1.14.0 azure-datalake-store==0.0.52 azure-storage-blob==12.8.1 Babel==2.9.1 backcall==0.2.0 bcrypt==3.2.0 billiard==3.6.4.0 black==21.5b1 blinker==1.4 boto3==1.15.18 botocore==1.18.18 cached-property==1.5.2 cachetools==4.2.2 cattrs==1.7.0 celery==4.4.7 Cerberus==1.3.2 certifi==2020.12.5 cffi==1.14.5 chardet==3.0.4 click==7.1.2 clickclick==20.10.2 colorama==0.4.4 colorlog==5.0.1 commonmark==0.9.1 croniter==1.0.13 cryptography==3.4.7 cycler==0.10.0 databricks-cli==0.14.3 databricks-connect==7.3.8 decorator==5.0.9 defusedxml==0.7.1 dill==0.3.3 dnspython==1.16.0 docutils==0.17.1 email-validator==1.1.2 entrypoints==0.3 Flask==1.1.4 Flask-AppBuilder==3.3.0 Flask-Babel==1.0.0 Flask-Bcrypt==0.7.1 Flask-Caching==1.10.1 Flask-JWT-Extended==3.25.1 Flask-Login==0.4.1 Flask-OpenID==1.2.5 Flask-SQLAlchemy==2.5.1 Flask-WTF==0.14.3 flower==0.9.5 fsspec==2021.5.0 gcsfs==2021.5.0 google-ads==7.0.0 google-api-core==1.26.0 google-api-python-client==1.12.8 google-auth==1.27.0 google-auth-httplib2==0.1.0 google-auth-oauthlib==0.4.4 google-cloud-automl==1.0.1 google-cloud-bigquery==2.17.0 google-cloud-bigquery-datatransfer==1.1.1 google-cloud-bigquery-storage==2.4.0 google-cloud-bigtable==1.7.0 google-cloud-container==1.0.1 google-cloud-core==1.6.0 google-cloud-datacatalog==0.7.0 google-cloud-dataproc==1.1.1 google-cloud-dlp==1.0.0 google-cloud-kms==1.4.0 google-cloud-language==1.3.0 google-cloud-logging==1.15.1 google-cloud-memcache==0.3.0 google-cloud-monitoring==1.1.0 google-cloud-os-login==1.0.0 google-cloud-pubsub==1.7.0 google-cloud-redis==1.0.0 google-cloud-secret-manager==1.0.0 google-cloud-spanner==1.19.1 google-cloud-speech==1.3.2 google-cloud-storage==1.38.0 google-cloud-tasks==1.5.0 google-cloud-texttospeech==1.0.1 google-cloud-translate==1.7.0 google-cloud-videointelligence==1.16.1 google-cloud-vision==1.0.0 google-crc32c==1.1.2 google-resumable-media==1.3.0 googleapis-common-protos==1.53.0 graphviz==0.16 grpc-google-iam-v1==0.12.3 grpcio==1.38.0 grpcio-gcp==0.2.2 gunicorn==20.1.0 h11==0.12.0 httpcore==0.13.3 httplib2==0.19.1 httpx==0.18.1 humanize==3.5.0 idna==2.10 importlib-metadata==1.7.0 importlib-resources==1.5.0 inflection==0.5.1 iniconfig==1.1.1 ipykernel==5.4.3 ipython==7.23.1 ipython-genutils==0.2.0 iso8601==0.1.14 isodate==0.6.0 itsdangerous==1.1.0 JayDeBeApi==1.2.3 jedi==0.18.0 Jinja2==2.11.3 jmespath==0.10.0 joblib==1.0.1 JPype1==1.2.1 jsonschema==3.2.0 jupyter-client==6.1.12 jupyter-core==4.7.1 kiwisolver==1.3.1 kombu==4.6.11 lazy-object-proxy==1.6.0 libcst==0.3.19 lockfile==0.12.2 Mako==1.1.4 Markdown==3.3.4 MarkupSafe==1.1.1 marshmallow==3.12.1 marshmallow-enum==1.5.1 marshmallow-oneofschema==2.1.0 marshmallow-sqlalchemy==0.23.1 matplotlib==3.3.4 matplotlib-inline==0.1.2 msrest==0.6.21 multidict==5.1.0 mypy-extensions==0.4.3 mysql-connector-python==8.0.22 mysqlclient==1.3.14 nbclient==0.5.3 
nbformat==5.1.3 nest-asyncio==1.5.1 nteract-scrapbook==0.4.2 numpy==1.20.3 oauthlib==3.1.0 openapi-schema-validator==0.1.5 openapi-spec-validator==0.3.1 oscrypto==1.2.1 packaging==20.9 pandas==1.2.4 pandas-gbq==0.15.0 papermill==2.3.3 paramiko==2.7.2 parso==0.8.2 pathspec==0.8.1 pendulum==2.1.2 pexpect==4.8.0 pickleshare==0.7.5 Pillow==8.2.0 prison==0.1.3 prometheus-client==0.8.0 prompt-toolkit==3.0.18 proto-plus==1.18.1 protobuf==3.17.1 psutil==5.8.0 psycopg2-binary==2.8.6 ptyprocess==0.7.0 py4j==0.10.9 pyarrow==3.0.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycparser==2.20 pycryptodomex==3.10.1 pydata-google-auth==1.2.0 Pygments==2.9.0 PyJWT==1.7.1 pymongo==3.11.4 PyNaCl==1.4.0 pyOpenSSL==20.0.1 pyparsing==2.4.7 pyrsistent==0.17.3 pysftp==0.2.9 python-daemon==2.3.0 python-dateutil==2.8.1 python-editor==1.0.4 python-nvd3==0.15.0 python-slugify==4.0.1 python3-openid==3.2.0 pytz==2021.1 pytzdata==2020.1 PyYAML==5.4.1 pyzmq==22.1.0 redis==3.5.3 regex==2021.4.4 requests==2.25.1 requests-oauthlib==1.3.0 rfc3986==1.5.0 rich==10.2.2 rsa==4.7.2 s3transfer==0.3.7 scikit-learn==0.24.1 scipy==1.6.3 setproctitle==1.2.2 simple-salesforce==1.11.1 six==1.16.0 slack-sdk==3.5.1 sniffio==1.2.0 snowflake-connector-python==2.4.3 snowflake-sqlalchemy==1.2.4 SQLAlchemy==1.3.23 SQLAlchemy-JSONField==1.0.0 SQLAlchemy-Utils==0.37.4 sqlparse==0.4.1 sshtunnel==0.1.5 swagger-ui-bundle==0.0.8 tableauserverclient==0.15.0 tabulate==0.8.9 tenacity==6.2.0 termcolor==1.1.0 text-unidecode==1.3 textwrap3==0.9.2 threadpoolctl==2.1.0 toml==0.10.2 tornado==6.1 tqdm==4.61.0 traitlets==5.0.5 typed-ast==1.4.3 typing-extensions==3.10.0.0 typing-inspect==0.6.0 unicodecsv==0.14.1 uritemplate==3.0.1 urllib3==1.25.11 vine==1.3.0 watchtower==0.7.3 wcwidth==0.2.5 Werkzeug==1.0.1 WTForms==2.3.3 yarl==1.6.3 zipp==3.4.1 ```
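For context, a minimal sketch of what the `download_file_from_s3ile` task in `tmp_dag.py` above presumably looks like (the real DAG code is not shown, so the bucket name, object key and callable body are assumptions; only the local path `/tmp/s3_hook.txt` and the two log messages are taken from the log above):

```python
import logging

from airflow.providers.amazon.aws.hooks.s3 import S3Hook


def download_file_from_s3ile(**_):
    # Fetch the object and download it to a fixed local path, mirroring the
    # HeadObject + GetObject calls visible in the debug log above.
    hook = S3Hook(aws_conn_id="aws_default")
    obj = hook.get_key("tmp.txt", bucket_name="my-bucket")  # bucket/key are placeholders
    local_path = "/tmp/s3_hook.txt"
    obj.download_file(local_path)
    logging.info("File downloaded: %s", local_path)
    with open(local_path) as f:
        # As reported above, in 2.1.0 this content does not show up properly in the task log.
        logging.info(f.read())
```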
https://github.com/apache/airflow/issues/16148
https://github.com/apache/airflow/pull/16424
cbf8001d7630530773f623a786f9eb319783b33c
d1d02b62e3436dedfe9a2b80cd1e61954639ca4d
"2021-05-28T18:23:20Z"
python
"2021-06-16T09:29:45Z"
closed
apache/airflow
https://github.com/apache/airflow
16,138
["airflow/www/utils.py", "tests/www/test_utils.py"]
doc_md code block collapsing lines
**Apache Airflow version**: 2.0.0 - 2.1.0 **Kubernetes version**: N/A **Environment**: - **Cloud provider or hardware configuration**: Docker on MacOS (but also AWS ECS deployed) - **OS** (e.g. from /etc/os-release): MacOS Big Sur 11.3.1 - **Kernel** (e.g. `uname -a`): Darwin Kernel Version 20.4.0 - **Install tools**: - **Others**: **What happened**: When a code block is part of a DAG's `doc_md`, it does not render correctly in the Web UI, but collapses all the lines into one line instead. **What you expected to happen**: The multi-line code block should be rendered with its line breaks preserved. **How to reproduce it**: Create a DAG with `doc_md` containing a code block: ````python from airflow import DAG DOC_MD = """\ # Markdown code block Inline `code` works well. ``` Code block does not respect newlines ``` """ dag = DAG( dag_id='test', doc_md=DOC_MD ) ```` The rendered documentation looks like this: <img src="https://user-images.githubusercontent.com/11132999/119981579-19a70600-bfbe-11eb-8036-7d981ae1f232.png" width="50%"/> **Anything else we need to know**: N/A
https://github.com/apache/airflow/issues/16138
https://github.com/apache/airflow/pull/16414
15ff2388e8a52348afcc923653f85ce15a3c5f71
6f9c0ceeb40947c226d35587097529d04c3e3e59
"2021-05-28T12:33:59Z"
python
"2021-06-13T00:30:11Z"
closed
apache/airflow
https://github.com/apache/airflow
16,090
["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "tests/core/test_configuration.py"]
Contradictory default in store_dag configuration reference
https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#store-dag-code ![Screenshot 2021-05-26 at 17 47 56](https://user-images.githubusercontent.com/11339132/119691160-a2049a00-be4a-11eb-8f55-0ad2c6117620.png) Is the default True or None?
https://github.com/apache/airflow/issues/16090
https://github.com/apache/airflow/pull/16093
57bd6fb2925a7d505a80b83140811b94b363f49c
bff213e07735d1ee45101f85b01b3d3a97cddbe5
"2021-05-26T15:49:01Z"
python
"2021-06-07T08:47:24Z"
closed
apache/airflow
https://github.com/apache/airflow
16,078
["airflow/jobs/scheduler_job.py", "airflow/models/taskinstance.py", "tests/jobs/test_scheduler_job.py", "tests/models/test_taskinstance.py"]
Queued tasks become running after dagrun is marked failed
**Apache Airflow version**: 2.1.0 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): **Environment**: - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): centos7 - **Kernel** (e.g. `uname -a`): 3.10.0 - **Install tools**: - **Others**: **What happened**: A dagrun has some tasks which are in running and queued status because of the concurrency limit. After I mark the dagrun as **failed**, the running tasks turn **failed** while the queued tasks turn **running**. **What you expected to happen**: The queued tasks should turn **failed** instead of **running**. **How to reproduce it**: - in airflow.cfg set worker_concurrency=8, dag_concurrency=64 - create a dag with 100 BashOperator tasks which are all independent, with a bash command "sleep 1d" (a minimal sketch of such a DAG is shown below) - run the dag, and you will see 8 tasks running, 56 queued and 36 scheduled - mark the dagrun as failed, and you will see the 8 running tasks set to failed, but another 8 set to running and the remaining 84 set to no_status. If the dagrun is marked failed again, this process repeats. **Anything else we need to know**:
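A minimal sketch of the reproduction DAG described in the steps above (dag_id, task ids and start_date are illustrative):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="many_sleeping_tasks",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    # 100 independent tasks that each sleep for a day; with worker_concurrency=8
    # only 8 run at a time and the rest stay in the queued/scheduled states.
    for i in range(100):
        BashOperator(task_id=f"sleep_{i}", bash_command="sleep 1d")
```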
https://github.com/apache/airflow/issues/16078
https://github.com/apache/airflow/pull/19095
561610b1f00daaac2ad9870ba702be49c9764fe7
8d703ae7db3c2a08b94c824a6f4287c3dd29cebf
"2021-05-26T03:56:09Z"
python
"2021-10-20T14:10:39Z"
closed
apache/airflow
https://github.com/apache/airflow
16,061
["airflow/utils/log/secrets_masker.py"]
Consider and add common sensitive names
**Description** Since sensitive information in the connection object (specifically the extras field) is now being masked based on sensitive key names, we should consider adding some common sensitive key names. `private_key` from [ssh connection](https://airflow.apache.org/docs/apache-airflow-providers-ssh/stable/connections/ssh.html) is an example. **Use case / motivation** The extras field used to be blocked out entirely before the sensitive value masking feature (#15599). [Before in 2.0.2](https://github.com/apache/airflow/blob/2.0.2/airflow/hooks/base.py#L78) and [after in 2.1.0](https://github.com/apache/airflow/blob/2.1.0/airflow/hooks/base.py#L78). An extras field containing sensitive information is now shown unless its key contains one of the sensitive names. **Are you willing to submit a PR?** @ashb has expressed interest in adding this.
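To illustrate the use case (all connection values below are made up): an SSH connection that keeps its key material in extras under a name that is not in the default sensitive list is shown unmasked. Until common names such as `private_key` are added to the defaults, they can be appended via the `[core] sensitive_var_conn_names` option introduced in 2.1.0.

```python
from airflow.models.connection import Connection

ssh_conn = Connection(
    conn_id="ssh_example",
    conn_type="ssh",
    host="example.com",
    login="deploy",
    # "private_key" is not in the default list of sensitive field names, so its
    # value is not masked in logs or the UI unless the name is added to
    # [core] sensitive_var_conn_names.
    extra='{"private_key": "-----BEGIN OPENSSH PRIVATE KEY-----\\n..."}',
)
```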
https://github.com/apache/airflow/issues/16061
https://github.com/apache/airflow/pull/16392
5fdf7468ff856ba8c05ec20637ba5a145586af4a
430073132446f7cc9c7d3baef99019be470d2a37
"2021-05-25T16:49:39Z"
python
"2021-06-11T18:08:35Z"
closed
apache/airflow
https://github.com/apache/airflow
16,056
["chart/templates/_helpers.yaml", "chart/tests/test_git_sync_scheduler.py", "chart/tests/test_git_sync_webserver.py", "chart/tests/test_git_sync_worker.py", "chart/tests/test_pod_template_file.py", "chart/values.schema.json", "chart/values.yaml"]
[Helm] Resources for the git-sync sidecar
**Description** It would be nice to be able to specify resources for the `git-sync` sidecar in the helm chart values. **Use case / motivation** I don't want to use KEDA for autoscaling and would like to set up an HPA myself. However, this is currently not possible because the chart does not expose resource settings for the `git-sync` sidecar (and an HPA needs resource requests on every container to compute utilization). **Are you willing to submit a PR?** Yes, I am willing to submit a PR. **Related Issues** Not that I know of.
https://github.com/apache/airflow/issues/16056
https://github.com/apache/airflow/pull/16080
6af963c7d5ae9b59d17b156a053d5c85e678a3cb
c90284d84e42993204d84cccaf5c03359ca0cdbd
"2021-05-25T15:02:45Z"
python
"2021-05-26T14:08:37Z"
closed
apache/airflow
https://github.com/apache/airflow
16,042
["airflow/www/static/css/flash.css", "airflow/www/static/css/main.css", "airflow/www/templates/appbuilder/flash.html"]
Show DAG Import Errors list items as collapsible spoilers, collapsed by default
**Description** Render the DAG Import Errors list items as collapsible, spoiler-type blocks that start out collapsed. The title of each spoiler block could be the first line of the traceback, the dag_id, or the full DAG filename (or a pair of them). **Use case / motivation** When the number of DAG import errors becomes huge (see screenshot below) it is hard to find a particular import error or to compare errors of different DAGs. Of course, this can be done with the browser's find-on-page, but when the un-collapsed list is huge it is inconvenient. ![image](https://user-images.githubusercontent.com/45458080/119460544-78167f00-bd47-11eb-9ad9-39a949d9c78f.png) **Are you willing to submit a PR?** **Related Issues**
https://github.com/apache/airflow/issues/16042
https://github.com/apache/airflow/pull/16072
4aaa8df51c23c8833f9fa11d445a4c5bab347347
62fe32590aab5acbcfc8ce81f297b1f741a0bf09
"2021-05-25T08:23:19Z"
python
"2021-05-25T19:48:35Z"
closed
apache/airflow
https://github.com/apache/airflow
16,039
["chart/templates/flower/flower-service.yaml", "chart/templates/webserver/webserver-deployment.yaml", "chart/templates/webserver/webserver-service.yaml", "chart/tests/test_flower.py", "chart/tests/test_webserver.py", "chart/values.schema.json", "chart/values.yaml"]
Kubernetes liveness probe fails when changing the default port for the Airflow UI from 8080 to 80 in the Helm Chart.
**Apache Airflow version**: 2.0.2. **Kubernetes version**: ``` Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.12", GitCommit:"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725", GitTreeState:"clean", BuildDate:"2020-05-06T05:09:48Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"} ``` **What happened**: I added the following block of code to the user values in the helm chart and because of that, the pod failed to start because the liveliness probe failed. ``` ports: airflowUI: 80 ``` ``` Normal Scheduled 3m19s default-scheduler Successfully assigned ucp/airflow-webserver-5c6dffbcd5-5crwg to ip-abcd.ap-south-1.compute.internal Normal Pulled 3m18s kubelet Container image "xyz" already present on machine Normal Created 3m18s kubelet Created container wait-for-airflow-migrations Normal Started 3m18s kubelet Started container wait-for-airflow-migrations Normal Pulled 3m6s kubelet Container image "xyz" already present on machine Normal Created 3m6s kubelet Created container webserver Normal Started 3m6s kubelet Started container webserver Warning Unhealthy 2m8s (x9 over 2m48s) kubelet Readiness probe failed: Get http://100.124.0.6:80/health: dial tcp 100.124.0.6:80: connect: connection refused Warning Unhealthy 2m4s (x10 over 2m49s) kubelet Liveness probe failed: Get http://100.124.0.6:80/health: dial tcp 100.124.0.6:80: connect: connection refused ``` **What you expected to happen**: The liveliness probe should pass. **How to reproduce it**: Just change the default port for airflowUI from 8080 to 80.
https://github.com/apache/airflow/issues/16039
https://github.com/apache/airflow/pull/16572
c2af5e3ca22eca7d4797b141520a97cf5e5cc879
8217db8cb4b1ff302c5cf8662477ac00f701e78c
"2021-05-25T07:57:13Z"
python
"2021-06-23T12:50:28Z"
closed
apache/airflow
https://github.com/apache/airflow
16,037
["airflow/operators/python.py", "airflow/utils/python_virtualenv.py", "tests/config_templates/requirements.txt", "tests/decorators/test_python_virtualenv.py", "tests/operators/test_python.py"]
Allow using requirements.txt in PythonVirtualenvOperator
Currently the operator only allows requirements to be set as a hard-coded list. It would be nice if Airflow could support reading them from a requirements file directly (similar to how operators read SQL files).
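A sketch of a workaround with the current API, assuming a requirements.txt file is readable at DAG-parse time on the scheduler and workers (path, dag_id and task_id are illustrative):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonVirtualenvOperator


def run_in_venv():
    print("running inside the virtualenv")


# Read the requirements file at parse time and pass its lines as the usual list.
with open("/opt/airflow/requirements.txt") as f:
    requirements = [
        line.strip() for line in f if line.strip() and not line.startswith("#")
    ]

with DAG(
    dag_id="venv_requirements_file",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    PythonVirtualenvOperator(
        task_id="venv_task",
        python_callable=run_in_venv,
        requirements=requirements,
    )
```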
https://github.com/apache/airflow/issues/16037
https://github.com/apache/airflow/pull/17349
cd4bc175cb7673f191126db04d052c55279ef7a6
b597ceaec9078b0ce28fe0081a196f065f600f43
"2021-05-25T07:47:15Z"
python
"2022-01-07T14:32:29Z"
closed
apache/airflow
https://github.com/apache/airflow
16,035
["airflow/sensors/base.py"]
GCSToLocalFilesystemOperator from Google providers pre 4.0.0 fails to import in airflow 2.1.0
The GCSToLocalFilesystemOperator in Google Provider <=3.0.0 had the wrong import for apply_defaults. It used ``` from airflow.sensors.base_sensor_operator import apply_defaults ``` instead of ``` from airflow.utils.decorators import apply_defaults ``` When we removed `apply_defaults` in #15667, the base_sensor_operator import was removed as well, which made the GCSToLocalFilesystemOperator stop working in 2.1.0. The import in base_sensor_operator will be restored in 2.1.1, and Google Provider 4.0.0 will work without problems after it is released. The workaround for Airflow 2.1.0 is to copy the code of the operator into the DAG and use it temporarily until the new versions are released.
https://github.com/apache/airflow/issues/16035
https://github.com/apache/airflow/pull/16040
71ef2f2ee9ccf238a99cb0e42412d2118bad22a1
0f8f66eb6bb5fe7f91ecfaa2e93d4c3409813b61
"2021-05-25T07:01:35Z"
python
"2021-05-27T05:08:34Z"
closed
apache/airflow
https://github.com/apache/airflow
16,024
["airflow/www/static/js/tree.js"]
airflow 2.1.0 - squares with tasks are aligned far to the right
**Apache Airflow version**: 2.1.0 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.19.8 **Environment**: - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): - **Kernel** (e.g. `uname -a`): - **Install tools**: - **Others**: **What happened**: I opened a DAG page and saw that the task squares were shifted all the way to the right. The pop-up window with the details of the task goes off-screen. Additionally, on a large monitor this leaves a lot of empty space. **What you expected to happen**: I believe that the task squares should be aligned closer to the center, as they were in version 2.0.2. **How to reproduce it**: open any page of a DAG that has completed or scheduled tasks **Anything else we need to know**: ![airflow 2 0 2](https://user-images.githubusercontent.com/84713660/119367154-57ecae80-bcba-11eb-911c-c81367a461fc.png) ![airflow 2 1 0](https://user-images.githubusercontent.com/84713660/119367158-591ddb80-bcba-11eb-8970-40aeb042c536.png)
https://github.com/apache/airflow/issues/16024
https://github.com/apache/airflow/pull/16067
44345f3a635d3aef3bf98d6a3134e8820564b105
f2aa9b58cb012a3bc347f43baeaa41ecdece4cbf
"2021-05-24T15:03:37Z"
python
"2021-05-25T20:20:31Z"
closed
apache/airflow
https://github.com/apache/airflow
16,008
["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"]
GoogleCloudStorageToBigQueryOperator reads string as a list in parameter source_objects
**Apache Airflow version**: 1.10.12 **Environment**: google cloud composer **What happened**: When using GoogleCloudStorageToBigQueryOperator and providing a string as the source_objects parameter, the operator iterates over the string as if it were a valid list. For example, `cloud_storage_to_bigquery = GoogleCloudStorageToBigQueryOperator( bucket = 'bucket', source_objects = 'abc', )` will result in looking for the sources bucket/a, bucket/b, bucket/c. **What you expected to happen**: An error should be raised about the type (string instead of list).
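A sketch of the intended usage together with the kind of defensive normalization that would avoid iterating over the characters of a bare string (this is illustrative, not necessarily the fix that was merged; the destination table is a placeholder):

```python
from airflow.contrib.operators.gcs_to_bq import GoogleCloudStorageToBigQueryOperator

source_objects = 'abc'
# Either reject a bare string or normalize it into a one-element list.
if isinstance(source_objects, str):
    source_objects = [source_objects]

cloud_storage_to_bigquery = GoogleCloudStorageToBigQueryOperator(
    task_id='gcs_to_bq',
    bucket='bucket',
    source_objects=source_objects,  # ['abc'] -> only bucket/abc is read
    destination_project_dataset_table='project.dataset.table',
)
```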
https://github.com/apache/airflow/issues/16008
https://github.com/apache/airflow/pull/16160
b7d1039b60f641e78381fbdcc33e68d291b71748
99d1535287df7f8cfced39baff7a08f6fcfdf8ca
"2021-05-23T09:34:41Z"
python
"2021-05-31T05:06:44Z"
closed
apache/airflow
https://github.com/apache/airflow
16,007
["airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"]
Masking passwords with empty connection passwords makes some logs unreadable in 2.1.0
Discovered in this [Slack conversation](https://apache-airflow.slack.com/archives/CCQ7EGB1P/p1621752408213700). When you have connections with empty passwords masking logs masks all the character breaks: ``` [2021-05-23 04:00:23,309] {{logging_mixin.py:104}} WARNING - ***-***-***-*** ***L***o***g***g***i***n***g*** ***e***r***r***o***r*** ***-***-***-*** [2021-05-23 04:00:23,309] {{logging_mixin.py:104}} WARNING - ***T***r***a***c***e***b***a***c***k*** ***(***m***o***s***t*** ***r***e***c***e***n***t*** ***c***a***l***l*** ***l***a***s***t***)***:*** [2021-05-23 04:00:23,309] {{logging_mixin.py:104}} WARNING - *** *** ***F***i***l***e*** ***"***/***u***s***r***/***l***o***c***a***l***/***l***i***b***/***p***y***t***h***o***n***3***.***8***/***l***o***g***g***i***n***g***/***_***_***i***n***i***t***_***_***.***p***y***"***,*** ***l***i***n***e*** ***1***0***8***1***,*** ***i***n*** ***e***m***i***t*** *** *** *** *** ***m***s***g*** ***=*** ***s***e***l***f***.***f***o***r***m***a***t***(***r***e***c***o***r***d***)*** ``` Until this is fixed, an easy workaround is to disable masking via disabling sensitive connection masking in configuration: ``` [core] hide_sensitive_var_conn_fields = False ``` or vial env variable: ``` AIRFLOW__CORE__HIDE_SENSITIVE_VAR_CONN_FIELDS="False" ``` This is only happening if the task accesses the connection that has empty password. However there are a number of cases where such an empty password might be "legitimate" - for example in `google` provider you might authenticate using env variable or workload identity and connection will contain an empty password then.
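A small illustration of why the output above looks the way it does: the masker ends up treating the empty password as a secret, and replacing an empty string matches at every position. (The real masker builds a regex from all registered secrets, but the effect is the same.)

```python
line = "--- Logging error ---"
masked = line.replace("", "***")  # an empty "secret" matches between every character
print(masked)
# ***-***-***-*** ***L***o***g***g***i***n***g*** ***e***r***r***o***r*** ***-***-***-***
```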
https://github.com/apache/airflow/issues/16007
https://github.com/apache/airflow/pull/16057
9c98a60cdd29f0b005bf3abdbfc42aba419fded8
8814a59a5bf54dd17aef21eefd0900703330c22c
"2021-05-23T08:41:10Z"
python
"2021-05-25T18:31:22Z"
closed
apache/airflow
https://github.com/apache/airflow
16,000
["chart/templates/secrets/elasticsearch-secret.yaml", "chart/templates/secrets/metadata-connection-secret.yaml", "chart/templates/secrets/pgbouncer-stats-secret.yaml", "chart/templates/secrets/redis-secrets.yaml", "chart/templates/secrets/result-backend-connection-secret.yaml", "chart/tests/test_elasticsearch_secret.py", "chart/tests/test_metadata_connection_secret.py", "chart/tests/test_redis.py", "chart/tests/test_result_backend_connection_secret.py"]
If the external Postgres password contains '@', part of it ends up in the host.
**What happened:** My password for the external Postgres RDS contained '@123' at the end, and because of a bug the part after the '@' ended up glued to the DB host. One can notice in the logs that DB_HOST has an unwanted 123@ in front of it: DB_HOST=**123@**{{postgres_host}}. I removed the '@' character from the password and it worked fine. I am using the latest image of apache/airflow and the official helm chart. ``` kc logs airflow-run-airflow-migrations-xxx BACKEND=postgresql DB_HOST=123@{{postgres_host}} DB_PORT=5432 .................... ERROR! Maximum number of retries (20) reached. Last check result: $ run_nc '123@{{postgres_host}}' '5432' Traceback (most recent call last): File "<string>", line 1, in <module> socket.gaierror: [Errno -2] Name or service not known Can't parse as an IP address ``` **Steps to reproduce:** One can easily reproduce this by using a password that contains the '@' character. ``` data: metadataConnection: user: {{postgres_airflow_username}} pass: {{postgres_airflow_password}} protocol: postgresql host: {{postgres_host}} port: 5432 db: {{postgres_airflow_dbname}} ``` **Expected behavior:** Migrations should run regardless of whether the Postgres password contains an @ character.
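For reference, a short illustration of why an unescaped '@' breaks the connection URI and how percent-encoding avoids it (user, password and host are placeholders); the chart templates would need to apply the same kind of quoting when they render the metadata connection string:

```python
from urllib.parse import quote

user = "airflow_user"
password = "secret@123"  # ends with '@123', like the password in the report
host = "mydb.abc123.ap-south-1.rds.amazonaws.com"

naive = f"postgresql://{user}:{password}@{host}:5432/airflow"
# Parsers split the userinfo at the first '@', so '123@' stays glued to the
# host -- which is exactly how DB_HOST became '123@{{postgres_host}}' above.
quoted = f"postgresql://{user}:{quote(password, safe='')}@{host}:5432/airflow"
print(naive)
print(quoted)  # postgresql://airflow_user:secret%40123@mydb...:5432/airflow
```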
https://github.com/apache/airflow/issues/16000
https://github.com/apache/airflow/pull/16004
26840970718228d1484142f0fe06f26bc91566cc
ce358b21533eeb7a237e6b0833872bf2daab7e30
"2021-05-22T21:06:26Z"
python
"2021-05-23T17:07:19Z"
closed
apache/airflow
https://github.com/apache/airflow
15,994
[".pre-commit-config.yaml", "airflow/sensors/base.py", "airflow/utils/orm_event_handlers.py", "dev/breeze/src/airflow_breeze/commands/production_image_commands.py", "scripts/ci/libraries/_sanity_checks.sh", "scripts/in_container/run_system_tests.sh", "tests/conftest.py"]
Use inclusive words in Apache Airflow project
**Description** The Apache Software Foundation is discussing how we can improve the inclusiveness of its projects and raise awareness of conscious language. Related thread on [email protected]: https://lists.apache.org/thread.html/r2d8845d9c37ac581046997d980464e8a7b6bffa6400efb0e41013171%40%3Cdiversity.apache.org%3E **Use case / motivation** We already have a pre-commit check that checks for some of these words. However, according to [CLC (Conscious Language Checker)](https://clcdemo.net/analysis.html?project=airflow.git), Apache Airflow still has problems with the following words: - he - her - him - his - master - sanity check - slave - whitelist (pylintrc) **Are you willing to submit a PR?** **Related Issues** #12982 https://github.com/apache/airflow/pull/9175
https://github.com/apache/airflow/issues/15994
https://github.com/apache/airflow/pull/23090
9a6baab5a271b28b6b3cbf96ffa151ac7dc79013
d7b85d9a0a09fd7b287ec928d3b68c38481b0225
"2021-05-21T18:31:42Z"
python
"2022-05-09T21:52:29Z"
closed
apache/airflow
https://github.com/apache/airflow
15,976
["airflow/www/widgets.py"]
Error when querying on the Browse view with empty date picker
**Apache Airflow version**: 2.0.2 **What happened**: Under Browse, when querying with any empty datetime fields, I received the mushroom cloud. ``` Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise raise value File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/security/decorators.py", line 109, in wraps return f(self, *args, **kwargs) File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/views.py", line 551, in list widgets = self._list() File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/baseviews.py", line 1127, in _list page_size=page_size, File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/baseviews.py", line 1026, in _get_list_widget page_size=page_size, File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 425, in query count = self.query_count(query, filters, select_columns) File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 347, in query_count query, filters, select_columns=select_columns, aliases_mapping={} File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 332, in _apply_inner_all query = self.apply_filters(query, inner_filters) File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 187, in apply_filters return filters.apply_all(query) File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/models/filters.py", line 298, in apply_all query = flt.apply(query, value) File "/usr/local/lib/python3.7/site-packages/airflow/www/utils.py", line 373, in apply value = timezone.parse(value, timezone=timezone.utc) File "/usr/local/lib/python3.7/site-packages/airflow/utils/timezone.py", line 173, in parse return pendulum.parse(string, tz=timezone or TIMEZONE, strict=False) # type: ignore File "/usr/local/lib/python3.7/site-packages/pendulum/parser.py", line 29, in parse return _parse(text, **options) File "/usr/local/lib/python3.7/site-packages/pendulum/parser.py", line 45, in _parse parsed = base_parse(text, **options) File "/usr/local/lib/python3.7/site-packages/pendulum/parsing/__init__.py", line 74, in parse return _normalize(_parse(text, **_options), **_options) File "/usr/local/lib/python3.7/site-packages/pendulum/parsing/__init__.py", line 120, in _parse return _parse_common(text, **options) File "/usr/local/lib/python3.7/site-packages/pendulum/parsing/__init__.py", line 177, in _parse_common return date(year, month, day) ValueError: year 0 is out of range ``` **What you expected to happen**: Perhaps give a warning/error banner that indicate Airflow cannot perform the search with bad input. I think it'll also work if the datetime picker defaults the timestamp to the current time. It looks like some fields are equipped to do that but not all. 
**How to reproduce it**: 1. Go to Browse 2. Try to query with an empty datetime picker **Anything else we need to know**: ![Screen Shot 2021-05-20 at 5 12 54 PM](https://user-images.githubusercontent.com/5952735/119063940-12b13f80-b98f-11eb-9b6f-a4d5c396e971.png) ![Screen Shot 2021-05-20 at 5 13 36 PM](https://user-images.githubusercontent.com/5952735/119063945-1349d600-b98f-11eb-91cd-92d813414eba.png) ![Screen Shot 2021-05-20 at 5 12 35 PM](https://user-images.githubusercontent.com/5952735/119063948-13e26c80-b98f-11eb-945f-1439a263fc58.png) ![Screen Shot 2021-05-20 at 5 14 17 PM](https://user-images.githubusercontent.com/5952735/119063949-147b0300-b98f-11eb-8e8c-d5ee1e23bfc1.png) ![Screen Shot 2021-05-20 at 5 14 37 PM](https://user-images.githubusercontent.com/5952735/119063950-147b0300-b98f-11eb-9055-c89518bf8524.png) ![Screen Shot 2021-05-20 at 5 15 01 PM](https://user-images.githubusercontent.com/5952735/119063951-147b0300-b98f-11eb-8323-7602bf673205.png)
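A minimal sketch, assuming the guard would live where the traceback points (the filter `apply` in `airflow/www/utils.py`): skip parsing when the picker is empty instead of letting pendulum turn an empty string into "year 0". This is illustrative, not the actual patch.

```python
# Guard an empty datetime filter value before it reaches pendulum.
# Based on the traceback above; the helper name is made up for illustration.
from airflow.utils import timezone


def parse_filter_value(value):
    if not value or not value.strip():
        return None  # treat an empty picker as "no filter" instead of crashing
    return timezone.parse(value, timezone=timezone.utc)
```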
https://github.com/apache/airflow/issues/15976
https://github.com/apache/airflow/pull/18602
0a37be3e3cf9289f63f1506bc31db409c2b46738
d74e6776fce1da2c887e33d79e2fb66c83c6ff82
"2021-05-21T00:17:06Z"
python
"2021-09-30T19:52:54Z"
closed
apache/airflow
https://github.com/apache/airflow
15,946
["airflow/task/task_runner/base_task_runner.py"]
Web UI not displaying the log when task fails - Permission Denied at temporary error file when using run_as_user
**Apache Airflow version**: 2.0.1 **Environment**: 2 Worker nodes and 1 Master - **Cloud provider or hardware configuration**: Oracle Cloud - **OS** (e.g. from /etc/os-release): Oracle Linux 7.8 - **Kernel**: Linux 4.14.35-1902.302.2.el7uek.x86_64 #2 SMP Fri Apr 24 14:24:11 PDT 2020 x86_64 x86_64 x86_64 GNU/Linux **What happened**: When a task fails, the Web UI doesn't display the log. The URL to fetch the log is built without the hostname. When we navigate to the log path and open the .log file in the OS, it shows a permission error when opening the temporary file generated to dump the error. I noticed that when we create the temporary file using NamedTemporaryFile, it creates a restricted file that can only be written by the user airflow. If any other user tries to write to the file, a PermissionError is raised. The message displayed in the UI is: ``` *** Log file does not exist: /path/to/log/1.log *** Fetching from: http://:8793/log/path/to/log/1.log *** Failed to fetch log file from worker. Invalid URL 'http://:8793/log/path/to/log/1.log': No host supplied ``` We can see the hostname is not obtained when building the URL, since the execution fails when dumping the error into the temporary file. When we access the log in the OS, the full log is there, but it also shows the Permission Denied error: ```PermissionError: [Errno 13] Permission denied: '/tmp/tmpmg2q49a8'``` **What you expected to happen**: Screenshot of the Web UI when the task fails: ![image](https://user-images.githubusercontent.com/63886802/118839942-53c92700-b89d-11eb-94ba-d7dd482717db.png) Screenshot of the log file, showing the Permission Denied error when accessing the tmp file: ![image](https://user-images.githubusercontent.com/63886802/118840062-6e030500-b89d-11eb-8e5b-282e5683d507.png) **Anything else we need to know**: The error occurs every time a task fails and the run_as_user / owner is not airflow. When the task succeeds, the log displays normally in the Web UI. I've added an os.chmod call on self._error_file in base_task_runner, right after the NamedTemporaryFile is created, using mode 0o0777, and now the logs appear normally even when the task fails. I intend to create a PR adding that line of code, but it depends on whether the community believes that opening up the permissions on the temp file is acceptable. As far as I know, there is no sensitive information exposure or obvious vulnerability introduced by this change. It's important to note that the task does not fail because of this problem; the problem is that the log is inaccessible through the Web UI, which can slow down troubleshooting.
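A minimal sketch of the workaround described in the report, assuming the error file is the `NamedTemporaryFile` created in the base task runner; the 0o777 mode mirrors the reporter's change, and a narrower mode may well be enough.

```python
# Widen permissions on the temporary error file right after creation so the
# run_as_user account (not airflow) can write the error payload into it.
import os
import tempfile

error_file = tempfile.NamedTemporaryFile(delete=True)
os.chmod(error_file.name, 0o777)  # 0o777 follows the issue; consider something tighter
```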
https://github.com/apache/airflow/issues/15946
https://github.com/apache/airflow/pull/15947
48316b9d17a317ddf22f60308429ce089585fb02
31b15c94886c6083a6059ca0478060e46db67fdb
"2021-05-19T15:33:41Z"
python
"2021-09-03T12:15:36Z"
closed
apache/airflow
https://github.com/apache/airflow
15,941
["docs/apache-airflow/start/docker.rst"]
Detect and inform the users in case there is not enough memory/disk for Docker Quick-start
The default amount of memory/disk allocated to Docker on macOS is usually not enough to run Airflow. We already detect this and provide informative messages when Breeze starts: https://github.com/apache/airflow/blob/master/scripts/ci/libraries/_docker_engine_resources.sh I believe we should do the same for the quick-start, as many Mac users raise the ``cannot start`` issue, which gets fixed after the memory is increased. Example here: https://github.com/apache/airflow/issues/15927
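As a rough illustration of the requested check (assuming the `docker` CLI is on PATH; this is not part of the quick-start today), something like the following could warn when the Docker VM is under-provisioned:

```python
# Warn when Docker reports less than an assumed 4 GiB of memory, a common
# cause of "cannot start" reports on macOS.
import shutil
import subprocess

MIN_MEMORY_BYTES = 4 * 1024 ** 3  # 4 GiB threshold is an assumption


def warn_if_low_docker_memory() -> None:
    if shutil.which("docker") is None:
        return  # docker CLI not available; nothing to check
    out = subprocess.run(
        ["docker", "info", "--format", "{{.MemTotal}}"],
        capture_output=True, text=True, check=False,
    )
    if out.returncode == 0 and out.stdout.strip().isdigit():
        mem = int(out.stdout.strip())
        if mem < MIN_MEMORY_BYTES:
            print(
                f"WARNING: Docker has only {mem / 1024 ** 3:.1f} GiB of memory; "
                "Airflow may fail to start. Increase it in Docker Desktop settings."
            )
```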
https://github.com/apache/airflow/issues/15941
https://github.com/apache/airflow/pull/15967
deececcabc080844ca89272a2e4ab1183cd51e3f
ce778d383e2df2857b09e0f1bfe279eecaef3f8a
"2021-05-19T13:37:31Z"
python
"2021-05-20T11:44:02Z"
closed
apache/airflow
https://github.com/apache/airflow
15,900
["chart/files/pod-template-file.kubernetes-helm-yaml", "chart/templates/_helpers.yaml", "chart/tests/test_pod_template_file.py"]
Chart: Extra mounts with DAG persistence and gitsync
**What happened**: When you have `dag.persistence` enabled and a `dag.gitSync.sshKeySecret` set, the gitSync container isn't added to the pod_template_file for k8s workers, as expected. However, the `volumes` for it are still added and, maybe worse, the SSH key is mounted into the Airflow worker. **What you expected to happen**: When using `dag.persistence` and a `dag.gitSync.sshKeySecret`, nothing gitSync-related should be added to the k8s workers. **How to reproduce it**: Deploy the helm chart with `dag.persistence` enabled and a `dag.gitSync.sshKeySecret`, e.g.: ``` dags: persistence: enabled: true gitSync: enabled: true repo: {some_repo} sshKeySecret: my-gitsync-secret extraSecrets: 'my-gitsync-secret': data: | gitSshKey: {base_64_private_key} ``` **Anything else we need to know**: After a quick look at CeleryExecutor workers, I don't think they are impacted, but it is worth double checking.
https://github.com/apache/airflow/issues/15900
https://github.com/apache/airflow/pull/15925
9875f640ca19dabd846c17f4278ccc90e189ae8d
8084cfbb36ec1da47cc6b6863bc08409d7387898
"2021-05-17T20:26:57Z"
python
"2021-05-21T23:17:02Z"
closed
apache/airflow
https://github.com/apache/airflow
15,888
["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"]
Abort a DAG Run
**Description** It would be great to have an option to abort a DAG Run through the REST API. **Use case / motivation** The proposed input params would be: - DAG_ID - DAG_RUN_ID Aborting the DAG Run should stop all of its running tasks and mark them as "failed". **Are you willing to submit a PR?** **Related Issues**
https://github.com/apache/airflow/issues/15888
https://github.com/apache/airflow/pull/17839
430976caad5970b718e3dbf5899d4fc879c0ac89
ab7658147445161fa3f7f2b139fbf9c223877f77
"2021-05-17T11:00:22Z"
python
"2021-09-02T19:32:45Z"
closed
apache/airflow
https://github.com/apache/airflow
15,886
["docs/apache-airflow/howto/operator/python.rst"]
Adding support for --index-url (or) --extra-index-url for PythonVirtualenvOperator
**Description** Add support for passing `--index-url` (or `--extra-index-url`) to pip when `PythonVirtualenvOperator` installs its requirements, so packages can be pulled from a custom or private package index. **Use case / motivation** **Are you willing to submit a PR?** **Related Issues**
https://github.com/apache/airflow/issues/15886
https://github.com/apache/airflow/pull/20048
9319a31ab11e83fd281b8ed5d8469b038ddad172
7627de383e5cdef91ca0871d8107be4e5f163882
"2021-05-17T09:10:59Z"
python
"2021-12-05T21:49:25Z"
closed
apache/airflow
https://github.com/apache/airflow
15,885
["CHANGELOG.txt", "airflow/api_connexion/schemas/task_instance_schema.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"]
Internal error on API REST /api/v1/dags/axesor/updateTaskInstancesState
**Apache Airflow version**: 2.0.2 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Running on Docker 19.03.13 **Environment**: - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): Windows 10 Enterprise - **Kernel**: - **Install tools**: - **Others**: **What happened**: I receive an HTTP Error 500 when changing task instance states through the REST API. **What you expected to happen**: I expected to receive an HTTP 200. **How to reproduce it**: First, we trigger a new Dag Run: ``` dag_id = 'test' run_id = 1000 r = requests.post('http://localhost:8080/api/v1/dags/' + dag_id + '/dagRuns', json={"dag_run_id": str(run_id), "conf": { } }, auth=HTTPBasicAuth('airflow', 'airflow')) if r.status_code == 200: print("Dag started with run_id", run_id) ``` Then we try to abort the DAG Run: ``` r = requests.get('http://localhost:8080/api/v1/dags/' + dag_id + '/dagRuns/' + str(run_id) + '/taskInstances?state=running', auth=HTTPBasicAuth('airflow', 'airflow')) task_id = r.json()['task_instances'][0]['task_id'] execution_date = r.json()['task_instances'][0]['execution_date'] r = requests.post('http://localhost:8080/api/v1/dags/' + dag_id + '/updateTaskInstancesState', json={"task_id": str(task_id), "execution_date": str(execution_date), "include_upstream": True, "include_downstream": True, "include_future": True, "include_past": False, "new_state": "failed" }, auth=HTTPBasicAuth('airflow', 'airflow')) print(r.status_code) ``` **Anything else we need to know**: This is the server-side stack trace: ``` Something bad has happened. Please consider letting us know by creating a bug report using GitHub (https://github.com/apache/airflow/issues/new/choose). Python version: 3.6.13 Airflow version: 2.0.2 Node: c8d75444cd4a ------------------------------------------------------------------------------- Traceback (most recent call last): File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "/home/airflow/.local/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise raise value File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "/home/airflow/.local/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/home/airflow/.local/lib/python3.6/site-packages/connexion/decorators/decorator.py", line 48, in wrapper response = function(request) File "/home/airflow/.local/lib/python3.6/site-packages/connexion/decorators/uri_parsing.py", line 144, in wrapper response = function(request) File "/home/airflow/.local/lib/python3.6/site-packages/connexion/decorators/validation.py", line 184, in wrapper response = function(request) File "/home/airflow/.local/lib/python3.6/site-packages/connexion/decorators/validation.py", line 384, in wrapper return function(request) File "/home/airflow/.local/lib/python3.6/site-packages/connexion/decorators/response.py", line 103, in wrapper response = function(request) File "/home/airflow/.local/lib/python3.6/site-packages/connexion/decorators/parameter.py", line 121, in wrapper return function(**kwargs) File "/home/airflow/.local/lib/python3.6/site-packages/airflow/api_connexion/security.py", line 47, in decorated return func(*args, **kwargs) File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/session.py", line 70, in wrapper return func(*args, session=session, **kwargs) File "/home/airflow/.local/lib/python3.6/site-packages/airflow/api_connexion/endpoints/task_instance_endpoint.py", line 314, in post_set_task_instances_state commit=not data["dry_run"], KeyError: 'dry_run' ``` With every call.
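A hedged client-side workaround based on the traceback: the handler reads `data["dry_run"]` without a default, so explicitly sending `dry_run` in the request body (a field that appears in the endpoint's documented schema) sidesteps the `KeyError` until the server treats it as optional. `dag_id`, `task_id` and `execution_date` are assumed to come from the reproduction steps above.

```python
import requests
from requests.auth import HTTPBasicAuth

r = requests.post(
    'http://localhost:8080/api/v1/dags/' + dag_id + '/updateTaskInstancesState',
    json={
        "task_id": str(task_id),
        "execution_date": str(execution_date),
        "include_upstream": True,
        "include_downstream": True,
        "include_future": True,
        "include_past": False,
        "new_state": "failed",
        "dry_run": False,  # the key the endpoint currently expects unconditionally
    },
    auth=HTTPBasicAuth('airflow', 'airflow'),
)
print(r.status_code)
```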
https://github.com/apache/airflow/issues/15885
https://github.com/apache/airflow/pull/15889
821ea6fc187a9780b8fe0dd76f140367681ba065
ac3454e4f169cdb0e756667575153aca8c1b6981
"2021-05-17T09:01:11Z"
python
"2021-05-17T14:15:11Z"
closed
apache/airflow
https://github.com/apache/airflow
15,834
["airflow/dag_processing/manager.py", "docs/apache-airflow/logging-monitoring/metrics.rst"]
Metrics documentation fixes and deprecations
**Apache Airflow version**: 2.0.2 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A **Environment**: N/A - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): - **Kernel** (e.g. `uname -a`): - **Install tools**: - **Others**: **What happened**: * `dag_processing.last_runtime.*` - In the 1.10.6 [UPDATING.md](https://github.com/apache/airflow/blob/master/UPDATING.md#airflow-1106) it was indicated that this metric would be removed in 2.0. It was not removed from the metrics documentation. Also, the metrics documentation doesn't mention that it is supposed to be removed/deprecated; it's documented as a gauge but it is actually a timer (reported in https://github.com/apache/airflow/issues/10091). * `dag_processing.processor_timeouts`: documented as a gauge but it is actually a counter. Again from https://github.com/apache/airflow/issues/10091. * `dag_file_processor_timeouts` - indicated as supposed to be removed in 2.0; it was not removed from the [code](https://github.com/apache/airflow/blob/37d549/airflow/utils/dag_processing.py#L1169) but it was removed from the docs. * It would be nice if the documentation of 1.10.15 indicated the deprecated metrics more clearly, not only in `UPDATING.md`. **What you expected to happen**: * The Metrics page should document all metrics being emitted by Airflow. * The Metrics page should correctly document the type of each metric. **How to reproduce it**: Check the official [Metrics Docs](https://airflow.apache.org/docs/apache-airflow/stable/logging-monitoring/metrics.html?highlight=metrics#). **Anything else we need to know**:
https://github.com/apache/airflow/issues/15834
https://github.com/apache/airflow/pull/27067
6bc05671dbcfb38881681b656370d888e6300e26
5890b083b1dcc082ddfa34e9bae4573b99a54ae3
"2021-05-14T00:50:54Z"
python
"2022-11-19T03:47:40Z"
closed
apache/airflow
https://github.com/apache/airflow
15,832
["airflow/www/static/js/dag.js", "airflow/www/static/js/dags.js", "airflow/www/templates/airflow/dag.html", "airflow/www/templates/airflow/dags.html"]
2.1 Airflow UI (Delete DAG) button is not working
In the Airflow UI, the Delete DAG button is not working as expected. Airflow version: 2.1.0 **What happened:** When we click the Delete DAG button for any DAG, it should delete the DAG, but we get a 404 error page, both on our platform and locally. **What you expected to happen:** <img width="1771" alt="Screen Shot 2021-05-13 at 3 30 50 PM" src="https://user-images.githubusercontent.com/47584863/118195521-365d0e80-b400-11eb-9453-d3030e011155.png"> When we click the Delete DAG button for any DAG, the DAG should be deleted. **How to reproduce it:** Go to the Airflow UI and select any DAG; on the right side of the page there will be a red Delete DAG button. <img width="1552" alt="Screen Shot 2021-05-13 at 3 17 14 PM" src="https://user-images.githubusercontent.com/47584863/118195296-cc446980-b3ff-11eb-86c3-964e32d79f89.png">
https://github.com/apache/airflow/issues/15832
https://github.com/apache/airflow/pull/15836
51e54cb530995edbb6f439294888a79724365647
634c12d08a8097bbb4dc7173dd56c0835acda735
"2021-05-13T22:31:45Z"
python
"2021-05-14T06:07:40Z"
closed
apache/airflow
https://github.com/apache/airflow
15,815
["airflow/providers/docker/CHANGELOG.rst", "airflow/providers/docker/example_dags/example_docker_copy_data.py", "airflow/providers/docker/operators/docker.py", "airflow/providers/docker/operators/docker_swarm.py", "airflow/providers/docker/provider.yaml", "docs/conf.py", "docs/exts/docs_build/third_party_inventories.py", "tests/providers/docker/operators/test_docker.py", "tests/providers/docker/operators/test_docker_swarm.py"]
New syntax to mount Docker volumes with --mount
I had this after reading #12537 and #9047. Currently `DockerOperator`’s `volumes` argument is passed directly to docker-py’s `bind` (aka `docker -v`). But `-v`’s behaviour has long been problematic, and [Docker has been pushing users to the new `--mount` syntax instead](https://docs.docker.com/storage/bind-mounts/#choose-the--v-or---mount-flag). With #12537, it seems like `-v`’s behaviour is also confusing to some Airflow users, so I want to migrate Airflow’s internals to `--mount`. However, `--mount` has a different syntax to `-v`, and the behaviour is also slightly different, so for compatibility reasons we can’t just do it under the hood. I can think of two possible solutions to this: A. Deprecate `volumes` altogether and introduce `DockerOperator(mounts=...)` This will emit a deprecation warning when the user passes `DockerOperator(volumes=...)` to tell them to convert to `DockerOperator(mounts=...)` instead. `volumes` will stay unchanged otherwise, and continue to be passed to bind mounts. `mounts` will take a list of [`docker.types.Mount`](https://docker-py.readthedocs.io/en/stable/api.html#docker.types.Mount) to describe the mounts. They will be passed directly to the mounts API. Some shorthands could be useful as well, for example: ```python DockerOperator( ... mounts=[ ('/root/data1', './data1'), # Source and target, default to volume mount. ('/root/data2', './data2', 'bind'), # Bind mount. ], ) ``` B. Reuse `volumes` and do introspection to choose between binds and mounts The `volumes` argument can be augmented to also accept `docker.types.Mount` instances, and internally we’ll do something like ```python binds = [] mounts = [] for vol in volumes: if isinstance(vol, str): binds.append(vol) elif isintance(vol, docker.types.Mount): mounts.append(vol) else: raise ValueError('...') if binds: warnings.warn('...', DeprecationWarning) ``` and pass the collected lists to binds and mounts respectively. I’m very interested in hearing thoughts on this. **Are you willing to submit a PR?** Yes **Related Issues** * #12537: Confusing on the bind syntax. * #9047: Implement mounting in `DockerSwarmOperator` (it’s a subclass of `DockerOperator`, but the `volumes` option is currently unused).
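A sketch of what option A could look like from a caller's point of view; `docker.types.Mount` is the real docker-py type, but the `mounts` parameter itself is the proposal above, not an existing `DockerOperator` argument.

```python
# Proposed usage under option A: pass docker-py Mount objects directly.
from docker.types import Mount

proposed_mounts = [
    Mount(target="/data1", source="/root/data1", type="bind"),     # bind mount
    Mount(target="/data2", source="airflow-data", type="volume"),  # named volume
]
# DockerOperator(..., mounts=proposed_mounts)  # hypothetical wiring under this proposal
```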
https://github.com/apache/airflow/issues/15815
https://github.com/apache/airflow/pull/15843
ac3454e4f169cdb0e756667575153aca8c1b6981
12995cfb9a90d1f93511a4a4ab692323e62cc318
"2021-05-13T06:28:57Z"
python
"2021-05-17T15:03:18Z"
closed
apache/airflow
https://github.com/apache/airflow
15,783
["airflow/providers/alibaba/cloud/log/oss_task_handler.py", "airflow/providers/amazon/aws/log/s3_task_handler.py", "airflow/providers/google/cloud/log/gcs_task_handler.py", "airflow/providers/microsoft/azure/log/wasb_task_handler.py", "airflow/utils/log/file_task_handler.py", "airflow/utils/log/log_reader.py", "airflow/www/static/js/ti_log.js", "tests/api_connexion/endpoints/test_log_endpoint.py", "tests/providers/google/cloud/log/test_gcs_task_handler.py", "tests/utils/log/test_log_reader.py"]
Auto-refresh of logs.
**Description** Auto-refresh of logs. **Use case / motivation** Similar to the auto-refresh that is already implemented in the Graph View, have the logs auto-refresh so it's easier to keep track of the different processes in the UI. Thank you in advance!
https://github.com/apache/airflow/issues/15783
https://github.com/apache/airflow/pull/26169
07fe356de0743ca64d936738b78704f7c05774d1
1f7b296227fee772de9ba15af6ce107937ef9b9b
"2021-05-11T22:54:11Z"
python
"2022-09-18T21:06:22Z"
closed
apache/airflow
https://github.com/apache/airflow
15,768
["scripts/in_container/prod/entrypoint_prod.sh"]
PythonVirtualenvOperator fails with error from pip execution: Can not perform a '--user' install.
**Apache Airflow version**: 2.0.2 **Environment**: - **Hardware configuration**: Macbook Pro 2017 - **OS**: macOS X Catalina 10.15.7 - **Kernel**: Darwin 19.6.0 - **Others**: Docker 20.10.6, docker-compose 1.29.1 **What happened**: Running the demo `example_python_operator` dag fails on the `virtualenv_python` step. The call to pip via subprocess fails: `subprocess.CalledProcessError: Command '['/tmp/venvt3_qnug6/bin/pip', 'install', 'colorama==0.4.0']' returned non-zero exit status 1.` The error coming from pip is: `ERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv.` **What you expected to happen**: The call to pip succeeds, and the colorama dependency is installed into the virtualenv without attempting to install to user packages. The `example_python_operator` dag execution succeeds. **How to reproduce it**: Setup airflow 2.0.2 in docker as detailed in the Quickstart guide: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html Once running, enable and manually trigger the `example_python_operator` dag via the webUI. The dag will fail at the `virtualenv_python` task. **Anything else we need to know**: Not a problem with the Airflow 2.0.1 docker-compose. Fairly certain this is due to the addition of the `PIP_USER` environment variable being set to `true` in this PR: https://github.com/apache/airflow/pull/14125 My proposed solution would be to prepend `PIP_USER=false` to the construction of the call to pip within `utils/python_virtualenv.py` here: https://github.com/apache/airflow/blob/25caeda58b50eae6ef425a52e794504bc63855d1/airflow/utils/python_virtualenv.py#L30
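A minimal sketch of the proposed change, assuming the install still goes through a subprocess call to the venv's pip as in `utils/python_virtualenv.py`; the function name and structure are illustrative, not the exact upstream patch.

```python
# Neutralise the image-level PIP_USER=true for just the venv install call.
import os
import subprocess
from typing import List


def install_into_venv(venv_pip: str, requirements: List[str]) -> None:
    env = {**os.environ, "PIP_USER": "false"}  # prevents pip's "--user install" error
    subprocess.check_call([venv_pip, "install", *requirements], env=env)
```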
https://github.com/apache/airflow/issues/15768
https://github.com/apache/airflow/pull/15774
996965aad9874e9c6dad0a1f147d779adc462278
533f202c22a914b881dc70ddf673ec81ffc8efcd
"2021-05-10T20:34:08Z"
python
"2021-05-11T09:17:09Z"
closed
apache/airflow
https://github.com/apache/airflow
15,748
["airflow/cli/commands/task_command.py"]
airflow tasks run --ship-dag not able to generate pickeled dag
**Apache Airflow version**: 2.0.1 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): No **Environment**: - **Cloud provider or hardware configuration**: local machine - **OS** (e.g. from /etc/os-release): 18.04.5 LTS (Bionic Beaver) - **Kernel** (e.g. `uname -a`): wsl2 **What happened**: Getting Pickled_id: None ``` root@12c7fd58e084:/opt/airflow# airflow tasks run example_bash_operator runme_0 now --ship-dag --interactive [2021-05-09 13:11:33,247] {dagbag.py:487} INFO - Filling up the DagBag from /files/dags Running <TaskInstance: example_bash_operator.runme_0 2021-05-09T13:11:31.788923+00:00 [None]> on host 12c7fd58e084 Pickled dag <DAG: example_bash_operator> as pickle_id: None Sending to executor. [2021-05-09 13:11:34,722] {base_executor.py:82} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'example_bash_operator', 'runme_0', '2021-05-09T13:11:31.788923+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/airflow/example_dags/example_bash_operator.py'] [2021-05-09 13:11:34,756] {local_executor.py:81} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'example_bash_operator', 'runme_0', '2021-05-09T13:11:31.788923+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/airflow/example_dags/example_bash_operator.py'] [2021-05-09 13:11:34,757] {local_executor.py:386} INFO - Shutting down LocalExecutor; waiting for running tasks to finish. Signal again if you don't want to wait. [2021-05-09 13:11:34,817] {dagbag.py:487} INFO - Filling up the DagBag from /opt/airflow/airflow/example_dags/example_bash_operator.py Running <TaskInstance: example_bash_operator.runme_0 2021-05-09T13:11:31.788923+00:00 [None]> on host 12c7fd58e084 ``` **What you expected to happen**: Pickled_id should get generated ``` Pickled dag <DAG: example_bash_operator> as pickle_id: None ``` **How to reproduce it**: run below command from command line in airflow environment ``` airflow tasks run example_bash_operator runme_0 now --ship-dag --interactive ``` **Would like to submit PR for this issue**: YES
https://github.com/apache/airflow/issues/15748
https://github.com/apache/airflow/pull/15890
d181604739c048c6969d8997dbaf8b159607904b
86d0a96bf796fd767cf50a7224be060efa402d94
"2021-05-09T13:16:14Z"
python
"2021-06-24T17:27:20Z"
closed
apache/airflow
https://github.com/apache/airflow
15,742
["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "kubernetes_tests/test_kubernetes_pod_operator_backcompat.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"]
Save pod name in xcom for KubernetesPodOperator.
Hello. Kubernetes generates a unique pod name. https://github.com/apache/airflow/blob/736a62f824d9062b52983633528e58c445d8cc56/airflow/kubernetes/pod_generator.py#L434-L458 It would be great if the pod name was available in Airflow after completing the task, so that, for example, we could use it to add [extra links](http://airflow.apache.org/docs/apache-airflow/stable/howto/define_extra_link.html) or use it as an argument in downstream tasks. To do this, we should save this name in XCOM table. The operator for BigQuery works in a similar way. https://github.com/apache/airflow/blob/736a62f824d9062b52983633528e58c445d8cc56/airflow/providers/google/cloud/operators/bigquery.py#L730 Thanks to this, we have links to the BigQuery console. https://github.com/apache/airflow/blob/736a62f824d9062b52983633528e58c445d8cc56/airflow/providers/google/cloud/operators/bigquery.py#L600-L605 https://github.com/apache/airflow/blob/736a62f824d9062b52983633528e58c445d8cc56/airflow/providers/google/cloud/operators/bigquery.py#L57-L86 Best regards, Kamil Breguła
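A hedged sketch of the requested behaviour as a user-side subclass; `self.pod` holding the created `V1Pod` after execution is an assumption about the operator's internals and may differ between provider versions.

```python
# Push the generated pod name and namespace to XCom so downstream tasks
# and extra links can reference them.
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator


class PodNamePublishingOperator(KubernetesPodOperator):
    def execute(self, context):
        result = super().execute(context)
        pod = getattr(self, "pod", None)  # assumed attribute set during execution
        if pod is not None:
            self.xcom_push(context, key="pod_name", value=pod.metadata.name)
            self.xcom_push(context, key="pod_namespace", value=pod.metadata.namespace)
        return result
```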
https://github.com/apache/airflow/issues/15742
https://github.com/apache/airflow/pull/15755
c493b4d254157f189493acbf5101167f753aa766
37d549bde79cd560d24748ebe7f94730115c0e88
"2021-05-08T19:16:42Z"
python
"2021-05-14T00:19:37Z"
closed
apache/airflow
https://github.com/apache/airflow
15,713
["Dockerfile", "Dockerfile.ci"]
Migrate to newer Node
We started to receive deprecation warnings (and artificial 20 second delay) while compiling assets for Airflow in master/ I think maybe it's the right time to migrate to newer Node. The old UI will still be there for quite a while. This however, I think, requires rather heavy testing of the whole UI functionality. Happy to collaborate on this one but possibly we should do it as part of bigger release? @ryanahamilton @jhtimmins @mik-laj - WDYT? How heavy/dangerous this might be? ``` ================================================================================ ================================================================================ DEPRECATION WARNING Node.js 10.x is no longer actively supported! You will not receive security or critical stability updates for this version. You should migrate to a supported version of Node.js as soon as possible. Use the installation script that corresponds to the version of Node.js you wish to install. e.g. * https://deb.nodesource.com/setup_12.x — Node.js 12 LTS "Erbium" * https://deb.nodesource.com/setup_14.x — Node.js 14 LTS "Fermium" (recommended) * https://deb.nodesource.com/setup_15.x — Node.js 15 "Fifteen" * https://deb.nodesource.com/setup_16.x — Node.js 16 "Gallium" Please see https://github.com/nodejs/Release for details about which version may be appropriate for you. The NodeSource Node.js distributions repository contains information both about supported versions of Node.js and supported Linux distributions. To learn more about usage, see the repository: https://github.com/nodesource/distributions ================================================================================ ================================================================================ ```
https://github.com/apache/airflow/issues/15713
https://github.com/apache/airflow/pull/15718
87e440ddd07935f643b93b6f2bbdb3b5e8500510
46d62782e85ff54dd9dc96e1071d794309497983
"2021-05-07T13:10:31Z"
python
"2021-05-07T16:46:31Z"
closed
apache/airflow
https://github.com/apache/airflow
15,708
["airflow/decorators/task_group.py", "tests/utils/test_task_group.py"]
@task_group returns int, but it appears in @task as TaskGroup
**Apache Airflow version** 13faa6912f7cd927737a1dc15630d3bbaf2f5d4d **Environment** - **Configuration**: Local Executor - **OS** (e.g. from /etc/os-release): Mac OS 11.3 - **Kernel**: Darwin Kernel Version 20.4.0 - **Install tools**: `pip install -e .` **The DAG** ```python @task def one(): return 1 @task_group def trivial_group(inval): @task def add_one(i): return i + 1 outval = add_one(inval) return outval @task def print_it(inval): print(inval) @dag(schedule_interval=None, start_date=days_ago(1), default_args={"owner": "airflow"}) def wrap(): x = one() y = trivial_group(x) z = print_it(y) wrap_dag = wrap() ``` **What happened**: `print_it` had no predecessors and receives `<airflow.utils.task_group.TaskGroup object at 0x128921940>` **What you expected to happen**: `print_it` comes after `trivial_group.add_one` and receives `2` The caller ends up with the task group itself, equivalent in the traditional api to `tg_ref` in: ``` with TaskGroup("trivial_group") tg_ref: pass ```` This interrupts the ability to continue using the Task Flow api because passing it into a function annotated with `@task` fails to register the dependency with whatever magic gets it out of xcom and adds edges to the dag. **To Replicate** ``` $ airflow dags test wrap $(date "+%Y-%m-%d") ```
https://github.com/apache/airflow/issues/15708
https://github.com/apache/airflow/pull/15779
c8ef3a3539f17b39d0a41d10a631d8d9ee564fde
303c89fea0a7cf8a857436182abe1b042d473022
"2021-05-06T22:06:31Z"
python
"2021-05-11T19:09:58Z"
closed
apache/airflow
https://github.com/apache/airflow
15,698
["airflow/models/dagrun.py", "tests/models/test_dagrun.py"]
task_instance_mutation_hook not called by scheduler when importing airflow.models.taskinstance
**Apache Airflow version**: 1.10.12 (also tested with 1.10.15, 2.0.2 but less extensively) **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): None **Environment**: Linux / Docker - **Cloud provider or hardware configuration**: None - **OS** (e.g. from /etc/os-release): Red Hat Enterprise Linux Server 7.9 (Maipo) - **Kernel** (e.g. `uname -a`): Linux d7b9410c0f25 4.19.104-microsoft-standard #1 SMP Wed Feb 19 06:37:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux - **Install tools**: - **Others**: Tested on both real Linux (Red Hat) and Docker inside a Windows machine. **What happened**: A custom `task_instance_mutation_hook` is not called by the scheduler even though Airflow loads the `airflow_local_settings` module. **What you expected to happen**: `task_instance_mutation_hook` should be called before every task instance run. I think the way `airflow.models.dagrun` loads `task_instance_mutation_hook` from `airflow_local_settings` does not work when `airflow_local_settings` imports `airflow.models.taskinstance` or `airflow.models.dagrun`. **How to reproduce it**: 1. Added `airflow_local_settings.py` to \{AIRFLOW_HOME\}\config ```python import logging from airflow.models.taskinstance import TaskInstance def task_instance_mutation_hook(ti: TaskInstance): logging.getLogger("").warning("HERE IN task_instance_mutation_hook log") print("HERE IN task_instance_mutation_hook") ti.queue = "X" ``` 2. See output `[2021-05-06 11:13:04,076] {settings.py:392} INFO - Loaded airflow_local_settings from /usr/local/airflow/config/airflow_local_settings.py.` 3. The function is never called - the log/print is not written and the queue does not update. 4. Additionally, example code to reproduce the issue: ```python import airflow import airflow.models.dagrun import inspect print(inspect.getfile(airflow.settings.task_instance_mutation_hook)) print(inspect.getfile(airflow.models.dagrun.task_instance_mutation_hook)) ``` outputs ``` /usr/local/airflow/config/airflow_local_settings.py /opt/bb/lib/python3.7/site-packages/airflow/settings.py ``` 5. When removing `from airflow.models.taskinstance import TaskInstance` from airflow_local_settings.py, everything works as expected. **Anything else we need to know**: BTW, do the logs printed from `task_instance_mutation_hook` go anywhere? Even after I remove the import and the queue is updated, I can't see anything in the log files or in the scheduler console.
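Based on point 5 above, a hedged workaround is to keep `airflow_local_settings.py` free of top-level imports from `airflow.models` (drop the type hint, or import lazily inside the function if it is really needed):

```python
# Version of the hook without a module-level TaskInstance import, which the
# reporter found makes the hook get picked up correctly.
import logging


def task_instance_mutation_hook(ti):  # no TaskInstance type hint / import needed
    logging.getLogger(__name__).warning("task_instance_mutation_hook called for %s", ti.task_id)
    ti.queue = "X"
```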
https://github.com/apache/airflow/issues/15698
https://github.com/apache/airflow/pull/15851
6b46af19acc5b561c1c5631a753cc07b1eca34f6
3919ee6eb9042562b6cafae7c34e476fbb413e13
"2021-05-06T12:45:58Z"
python
"2021-05-15T09:11:52Z"
closed
apache/airflow
https://github.com/apache/airflow
15,656
["airflow/www/static/css/dags.css"]
Scrolling issue with new fast trigger with single DAG
**Apache Airflow version**: master **What happened**: If you have a single DAG, half of the new "fast trigger" dropdown is hidden on the dashboard and causes a scrollbar in the DAG table. **How to reproduce it**: Have a single DAG in your instance and click on the trigger button from the dashboard. ![Screen Shot 2021-05-04 at 10 17 22 AM](https://user-images.githubusercontent.com/66968678/117036116-56832400-acc2-11eb-9166-b9419c163429.png)
https://github.com/apache/airflow/issues/15656
https://github.com/apache/airflow/pull/15660
d723ba5b5cfb45ce7f578c573343e86247a2d069
a0eb747b8d73f71dcf471917e013669a660cd4dd
"2021-05-04T16:21:54Z"
python
"2021-05-05T00:28:05Z"
closed
apache/airflow
https://github.com/apache/airflow
15,650
["airflow/utils/db.py"]
Docs: check_migrations more verbose documentation
**Description** The documentation and the error message of the [check-migrations](https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#check-migrations) command / `def check_migrations` could be more verbose. This check can fail if the underlying database was never initialised. **Use case / motivation** We are deploying our Airflow Helm chart with the Terraform Helm provider. This provider has a [bug](https://github.com/hashicorp/terraform-provider-helm/issues/683) with Helm hooks. If Airflow gave us a more verbose error message about why `check-migrations` can fail, we would have found the underlying error/bug much sooner. **Are you willing to submit a PR?** Yes
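As an illustration of the ask, a rough sketch (not the actual `airflow/utils/db.py` implementation; `_current_revision_matches_head` is a made-up placeholder) of a check that fails with a message pointing at a never-initialised database:

```python
import time


def check_migrations(timeout: int) -> None:
    """Wait until the metadata DB is at the latest migration, or fail verbosely."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if _current_revision_matches_head():  # placeholder for the real alembic comparison
            return
        time.sleep(1)
    raise TimeoutError(
        f"There are still unapplied migrations after {timeout} seconds. "
        "If this database has never been initialised, run 'airflow db init' "
        "(or 'airflow db upgrade') before 'airflow db check-migrations'."
    )
```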
https://github.com/apache/airflow/issues/15650
https://github.com/apache/airflow/pull/15662
e47f7e42b632ad78a204531e385ec09bcce10816
86ad628158eb728e56c817eea2bea4ddcaa571c2
"2021-05-04T09:51:15Z"
python
"2021-05-05T05:30:11Z"
closed
apache/airflow
https://github.com/apache/airflow
15,641
["airflow/models/dag.py", "docs/apache-airflow/concepts/dags.rst", "docs/apache-airflow/concepts/tasks.rst"]
Add documentation on what each parameter to a `sla_miss_callback` callable is
I couldn't find any official documentation specifying what each parameter to a `sla_miss_callback` callable is. This would be a great addition, so users know how to properly format the messages they send.
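For reference, a hedged example of such a callback; the five positional arguments below follow what recent Airflow versions pass to `sla_miss_callback`, but verify against your version, since this is exactly the documentation gap the issue describes.

```python
# A callback that simply logs everything the scheduler hands it.
def sla_callback(dag, task_list, blocking_task_list, slas, blocking_tis):
    print(
        f"SLA missed in DAG {dag.dag_id}: "
        f"task_list={task_list!r}, blocking_task_list={blocking_task_list!r}, "
        f"slas={slas!r}, blocking_tis={blocking_tis!r}"
    )

# dag = DAG(..., sla_miss_callback=sla_callback)  # wiring it into a DAG
```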
https://github.com/apache/airflow/issues/15641
https://github.com/apache/airflow/pull/18305
2b62a75a34d44ac7d9ed83c02421ff4867875577
dcfa14d60dade3fdefa001d10013466fe4d77f0d
"2021-05-03T21:18:17Z"
python
"2021-09-18T19:18:32Z"