| column | dtype | values / range |
| --- | --- | --- |
| status | stringclasses | 1 value |
| repo_name | stringclasses | 13 values |
| repo_url | stringclasses | 13 values |
| issue_id | int64 | 1 to 104k |
| updated_files | stringlengths | 11 to 1.76k |
| title | stringlengths | 4 to 369 |
| body | stringlengths | 0 to 254k |
| issue_url | stringlengths | 38 to 55 |
| pull_url | stringlengths | 38 to 53 |
| before_fix_sha | stringlengths | 40 to 40 |
| after_fix_sha | stringlengths | 40 to 40 |
| report_datetime | unknown | |
| language | stringclasses | 5 values |
| commit_datetime | unknown | |
closed
apache/airflow
https://github.com/apache/airflow
15,622
["airflow/providers/google/CHANGELOG.rst", "airflow/providers/google/cloud/example_dags/example_dataproc.py", "airflow/providers/google/cloud/hooks/dataproc.py", "airflow/providers/google/cloud/operators/dataproc.py", "airflow/providers/google/cloud/sensors/dataproc.py", "tests/providers/google/cloud/hooks/test_dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py", "tests/providers/google/cloud/sensors/test_dataproc.py"]
Inconsistencies with Dataproc Operator parameters
I'm looking at the GCP Dataproc operators and noticed that `DataprocCreateClusterOperator` and `DataprocDeleteClusterOperator` require a `region` parameter, while other operators, like `DataprocSubmitJobOperator`, require a `location` parameter instead. I think it would be best to consistently enforce the parameter as `region`, because that's what is required in the protos for all of the [cluster CRUD operations](https://github.com/googleapis/python-dataproc/blob/master/google/cloud/dataproc_v1/proto/clusters.proto) and for [job submission](https://github.com/googleapis/python-dataproc/blob/d4b299216ad833f68ad63866dbdb2c8f2755c6b4/google/cloud/dataproc_v1/proto/jobs.proto#L741). `location` also feels too ambiguous imho, because it implies we could also pass a GCE zone, which in this case is either unnecessary or not supported (I can't remember which and it's too late in my Friday for me to double check 🙃). This might be similar to #13454 but I'm not 100% sure. If we think this is worth working on, I could maybe take this on as a PR, but it would be low priority for me, and if someone else wants to take it on, they should feel free to 😄

**Apache Airflow version**: 2.0.1

**Environment**: running locally on a Mac with a SQLite backend, only running a unit test that makes sure the DAG compiles. Installed with pip and the provided constraints file.

**What happened**: if you pass `location` to `DataprocCreateClusterOperator`, the DAG won't compile and throws `airflow.exceptions.AirflowException: Argument ['region']`.
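To make the inconsistency concrete, here is a minimal sketch of what the two operators expect in the provider version this issue was written against (the project, cluster, and bucket names are made up):

```python
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocCreateClusterOperator,
    DataprocSubmitJobOperator,
)

# Cluster CRUD operators take `region`...
create_cluster = DataprocCreateClusterOperator(
    task_id="create_cluster",
    project_id="my-project",      # hypothetical project
    cluster_name="my-cluster",    # hypothetical cluster
    region="us-central1",
    cluster_config={"master_config": {"num_instances": 1}},
)

# ...while job submission takes `location` for the same concept.
submit_job = DataprocSubmitJobOperator(
    task_id="submit_job",
    project_id="my-project",
    location="us-central1",
    job={
        "reference": {"project_id": "my-project"},
        "placement": {"cluster_name": "my-cluster"},
        "pyspark_job": {"main_python_file_uri": "gs://my-bucket/job.py"},
    },
)
```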
https://github.com/apache/airflow/issues/15622
https://github.com/apache/airflow/pull/16034
5a5f30f9133a6c5f0c41886ff9ae80ea53c73989
b0f7f91fe29d1314b71c76de0f11d2dbe81c5c4a
"2021-04-30T23:46:34Z"
python
"2021-07-07T20:37:32Z"
closed
apache/airflow
https://github.com/apache/airflow
15,598
["airflow/providers/qubole/CHANGELOG.rst", "airflow/providers/qubole/hooks/qubole.py", "airflow/providers/qubole/hooks/qubole_check.py", "airflow/providers/qubole/operators/qubole.py", "airflow/providers/qubole/provider.yaml"]
Qubole Hook Does Not Support 'include_headers'
**Description** The Qubole Hook and Operator do not support the `include_header` param for getting results with headers. Add support for `include_header`: `get_results(..., arguments=[True])`. **Use case / motivation** It's very hard to work with CSV results from a db without headers. This is super important when using Qubole's databases. **Are you willing to submit a PR?** Not sure yet, I can give it a try.
https://github.com/apache/airflow/issues/15598
https://github.com/apache/airflow/pull/15615
edbc89c64033517fd6ff156067bc572811bfe3ac
47a5539f7b83826b85b189b58b1641798d637369
"2021-04-29T21:01:34Z"
python
"2021-05-04T06:39:27Z"
closed
apache/airflow
https://github.com/apache/airflow
15,596
["airflow/dag_processing/manager.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst", "newsfragments/30076.significant.rst", "tests/dag_processing/test_manager.py"]
Using SLAs causes DagFileProcessorManager timeouts and prevents deleted dags from being recreated
**Apache Airflow version**: 2.0.1 and 2.0.2 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A **Environment**: Celery executors, Redis + Postgres - **Cloud provider or hardware configuration**: Running inside docker - **OS** (e.g. from /etc/os-release): Centos (inside Docker) **What happens**: In 2.0.0 if you delete a dag from the GUI when the `.py` file is still present, the dag is re-added within a few seconds (albeit with no history etc. etc.). Upon attempting to upgrade to 2.0.1 we found that after deleting a dag it would take tens of minutes to come back (or more!), and its reappearance was seemingly at random (i.e. restarting schedulers / guis did not help). It did not seem to matter which dag it was. The problem still exists in 2.0.2. **What you expected to happen**: Deleting a dag should result in that dag being re-added in short order if the `.py` file is still present. **Likely cause** I've tracked it back to an issue with SLA callbacks. I strongly suspect the fix for Issue #14050 was inadvertently responsible, since that was in the 2.0.1 release. In a nutshell, it appears the dag_processor_manager gets into a state where on every single pass it takes so long to process SLA checks for one of the dag files that the entire processor times out and is killed. As a result, some of the dag files (that are queued behind the poison pill file) never get processed and thus we don't reinstate the deleted dag unless the system gets quiet and the SLA checks clear down. To reproduce in _my_ setup, I created a clean airflow instance. The only materially important config setting I use is `AIRFLOW__SCHEDULER__PARSING_PROCESSES=1` which helps keep things deterministic. I then started adding in dag files from the production system until I found a file that caused the problem. Most of our dags do not have SLAs, but this one did. After adding it, I started seeing lines like this in `dag_processor_manager.log` (file names have been changed to keep things simple) ``` [2021-04-29 16:27:19,259] {dag_processing.py:1129} ERROR - Processor for /home/airflow/dags/problematic.py with PID 309 started at 2021-04-29T16:24:19.073027+00:00 has timed out, killing it. ``` Additionally, the stats contained lines like: ``` File Path PID Runtime # DAGs # Errors Last Runtime Last Run ----------------------------------------------------------------- ----- --------- -------- ---------- -------------- ------------------- /home/airflow/dags/problematic.py 309 167.29s 8 0 158.78s 2021-04-29T16:24:19 ``` (i.e. 3 minutes to process a single file!) Of note, the parse time of the affected file got longer on each pass until the processor was killed. Increasing `AIRFLOW__CORE__DAG_FILE_PROCESSOR_TIMEOUT` to e.g. 300 did nothing to help; it simply bought a few more iterations of the parse loop before it blew up. 
Browsing the log file for `scheduler/2021-04-29/problematic.py.log` I could see the following: <details><summary>Log file entries in 2.0.2</summary> ``` [2021-04-29 16:06:44,633] {scheduler_job.py:629} INFO - Processing file /home/airflow/dags/problematic.py for tasks to queue [2021-04-29 16:06:44,634] {logging_mixin.py:104} INFO - [2021-04-29 16:06:44,634] {dagbag.py:451} INFO - Filling up the DagBag from /home/airflow/dags/problematic [2021-04-29 16:06:45,001] {scheduler_job.py:639} INFO - DAG(s) dict_keys(['PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-S1-weekends', 'PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-S1-weekends', 'PARQUET-BASIC-DATA-PIPELINE-TODAY-S1-weekends', 'PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-S1-weekends', 'PARQUET-BASIC-DATA-PIPELINE-TODAY-S2-weekends', 'PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-S2-weekends', 'PARQUET-BASIC-DATA-PIPELINE-TODAY-S3-weekends', 'PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-S3-weekends']) retrieved from /home/airflow/dags/problematic.py [2021-04-29 16:06:45,001] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-S1-weekends [2021-04-29 16:06:46,398] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-S1-weekends [2021-04-29 16:06:47,615] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-S1-weekends [2021-04-29 16:06:48,852] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-S1-weekends [2021-04-29 16:06:49,411] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-S1-weekends [2021-04-29 16:06:50,156] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-S1-weekends [2021-04-29 16:06:50,845] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-SP500_Index_1-weekends [2021-04-29 16:06:52,164] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-S1-weekends [2021-04-29 16:06:53,474] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-YESTERDAY-APPEND-S1-weekends [2021-04-29 16:06:54,731] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-SP500_Index_1-weekends [2021-04-29 16:06:55,345] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-S1-weekends [2021-04-29 16:06:55,920] {scheduler_job.py:396} INFO - Running SLA Checks for PARQUET-BASIC-DATA-PIPELINE-TODAY-APPEND-S1-weekends and so on for 100+ more lines like this... ``` </details> Two important points: from the above logs: 1. We seem to be running checks on the same dags multiple times 2. The number of checks grows on each pass (i.e. the number of log lines beginning "Running SLA Checks..." increases on each pass until the processor manager is restarted, and then it begins afresh) **Likely location of the problem**: This is where I start to run out of steam. I believe the culprit is this line: https://github.com/apache/airflow/blob/2.0.2/airflow/jobs/scheduler_job.py#L1813 It seems to me that the above leads to a feedback where each time you send a dag callback to the processor you include a free SLA callback as well, hence the steadily growing SLA processing log messages / behaviour I observed. 
As noted above, this method call _was_ in 2.0.0 but until Issue #14050 was fixed, the SLAs were ignored, so the problem only kicked in from 2.0.1 onwards. Unfortunately, my airflow-fu is not good enough for me to suggest a fix beyond the Gordian solution of removing the line completely (!); in particular, it's not clear to me how / where SLAs _should_ be being checked. Should the dag_processor_manager be doing them? Should it be another component (I mean, naively, I would have thought it should be the workers, so that SLA checks can scale with the rest of your system)? How should the checks be enqueued? I dunno enough to give a good answer. 🤷 **How to reproduce it**: In our production system, it would blow up every time, immediately. _Reliably_ reproducing in a clean system depends on how fast your test system is; the trick appears to be getting the scan of the dag file to take long enough that the SLA checks start to snowball. The dag below did it for me; if your machine seems to be staying on top of processing the dags, try increasing the number of tasks in a single dag (or buy a slower computer!) <details><summary>Simple dag that causes the problem</summary> ``` import datetime as dt import pendulum from airflow import DAG from airflow.operators.bash import BashOperator def create_graph(dag): prev_task = None for i in range(10): next_task = BashOperator( task_id=f'simple_task_{i}', bash_command="echo SLA issue", dag=dag) if prev_task: prev_task >> next_task prev_task = next_task def create_dag(name: str) -> DAG: tz_to_use = pendulum.timezone('UTC') default_args = { 'owner': '[email protected]', 'start_date': dt.datetime(2018, 11, 13, tzinfo=tz_to_use), 'email': ['[email protected]'], 'email_on_failure': False, 'email_on_retry': False, 'sla': dt.timedelta(hours=13), } dag = DAG(name, catchup=False, default_args=default_args, max_active_runs=10, schedule_interval="* * * * *") create_graph(dag) return dag for i in range(100): name = f"sla_dag_{i}" globals()[name] = create_dag(name) ``` </details> To reproduce: 1. Configure an empty airflow instance, s.t. it only has one parsing process (as per config above). 2. Add the file above into the install. The file simply creates 100 near-trivial dags. On my system, airflow can't stay ahead, and is basically permanently busy processing the backlog. Your cpu may have more hamsters, in which case you'll need to up the number of tasks and/or dags. 2. Locate and tail the `scheduler/[date]/sla_example.py.log` file (assuming you called the above `sla_example.py`, of course) 3. This is the non-deterministic part. On my system, within a few minutes, the processor manager is taking noticeably longer to process the file and you should be able to see lots of SLA log messages like my example above ☝️. Like all good exponential growth it takes many iterations to go from 1 second to 1.5 seconds to 2 seconds, but not very long at all to go from 10 seconds to 30 to 💥 **Anything else we need to know**: 1. I'm working around this for now by simply removing the SLAs from the dag. This solves the problem since the SLA callbacks are then dropped. But SLAs are a great feature, and I'd like them back (please!). 2. Thanks for making airflow and thanks for making it this far down the report!
https://github.com/apache/airflow/issues/15596
https://github.com/apache/airflow/pull/30076
851fde06dc66a9f8e852f9a763746a47c47e1bb7
53ed5620a45d454ab95df886a713a5e28933f8c2
"2021-04-29T20:21:20Z"
python
"2023-03-16T21:51:23Z"
closed
apache/airflow
https://github.com/apache/airflow
15,559
["airflow/settings.py", "tests/core/test_sqlalchemy_config.py", "tests/www/test_app.py"]
Airflow DAG run marked success, but tasks never started or scheduled
Hi team: my DAG runs on a 1-minute schedule. Some DAG runs are in the success state, but the tasks inside those runs were never started or scheduled: ![image](https://user-images.githubusercontent.com/41068725/116344664-357d8780-a819-11eb-8010-c746ffdfdbcc.png) How can I fix this?
https://github.com/apache/airflow/issues/15559
https://github.com/apache/airflow/pull/15714
507bca57b9fb40c36117e622de3b1313c45b41c3
231d104e37da57aa097e5f726fe6d3031ad04c52
"2021-04-28T03:58:29Z"
python
"2021-05-09T08:45:16Z"
closed
apache/airflow
https://github.com/apache/airflow
15,538
["airflow/providers/amazon/aws/hooks/s3.py", "airflow/providers/amazon/aws/sensors/s3_key.py", "tests/providers/amazon/aws/hooks/test_s3.py", "tests/providers/amazon/aws/sensors/test_s3_key.py"]
S3KeySensor wildcard fails to match valid unix wildcards
**Apache Airflow version**: 1.10.12

**Environment**: - **Cloud provider or hardware configuration**: AWS MWAA

**What happened**: In a DAG, we implemented an S3KeySensor with a wildcard. This was meant to match an S3 object whose name could include a variable digit. Using an asterisk in our name, we could detect the object, but when instead we used [0-9] we could not.

**What you expected to happen**: S3KeySensor bucket_key should be interpretable as any valid Unix wildcard pattern, probably as defined here: https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/x11655.htm or something similar. I looked into the source code and have tracked this to the `get_wildcard_key` function in `S3_hook`: https://airflow.apache.org/docs/apache-airflow/1.10.14/_modules/airflow/hooks/S3_hook.html. This function works by iterating over many objects in the S3 bucket and checking if any matches the wildcard. The checking is done with `fnmatch`, which does support ranges. The problem seems to be in a performance optimization. Instead of looping over all objects, which could be expensive in many cases, the code tries to select a Prefix that all files matching the wildcard would share. This prefix is generated by splitting on the first usage of `*` in the `wildcard_key`. That is the issue: it only splits on `*`, which means that if `foo_[0-9].txt` is passed in as the `wildcard_key`, the prefix is still evaluated as `foo_[0-9].txt` and only objects that begin with that string will be listed. This would not catch an object named `foo_0`. I believe the right fix would be either to: 1. Drop the Prefix performance optimization and list all objects in the bucket, or 2. Split on any special character when generating the prefix so that the prefix is accurate.

**How to reproduce it**: This should be reproducible with any DAG using S3, by placing a file that should set off a wildcard sensor where the wildcard includes a range. For example, the file `foo_1.txt` in bucket `my_bucket`, with an S3KeySensor where bucket_name='my_bucket', bucket_key='foo_[0-9].txt', and wildcard_match=True.

**Anything else we need to know**: This problem will occur every time.
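A minimal, stdlib-only sketch of option 2 above (the helper names are made up; the real change would live in `get_wildcard_key` in the S3 hook): cut the literal prefix at the first glob metacharacter rather than only at `*`, then let `fnmatch` do the full match:

```python
import fnmatch
import re


def prefix_before_wildcard(wildcard_key: str) -> str:
    """Return everything before the first glob metacharacter (*, ? or [)."""
    match = re.search(r"[*?\[]", wildcard_key)
    return wildcard_key[: match.start()] if match else wildcard_key


def matching_keys(candidate_keys, wildcard_key):
    """Narrow candidates by the literal prefix, then apply the full pattern."""
    prefix = prefix_before_wildcard(wildcard_key)
    return [
        key
        for key in candidate_keys
        if key.startswith(prefix) and fnmatch.fnmatch(key, wildcard_key)
    ]


# The prefix for 'foo_[0-9].txt' is now 'foo_', so foo_1.txt is found.
print(matching_keys(["foo_1.txt", "bar.txt"], "foo_[0-9].txt"))
```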
https://github.com/apache/airflow/issues/15538
https://github.com/apache/airflow/pull/18211
2f88009bbf8818f3b4b553a04ae3b848af43c4aa
12133861ecefd28f1d569cf2d190c2f26f6fd2fb
"2021-04-26T20:30:10Z"
python
"2021-10-01T17:36:03Z"
closed
apache/airflow
https://github.com/apache/airflow
15,536
["airflow/providers/apache/beam/hooks/beam.py", "tests/providers/apache/beam/hooks/test_beam.py", "tests/providers/google/cloud/hooks/test_dataflow.py"]
Get rid of state in Apache Beam provider hook
As discussed in https://github.com/apache/airflow/pull/15534#discussion_r620500075, we could possibly rewrite Beam Hook to remove the need of storing state in it.
https://github.com/apache/airflow/issues/15536
https://github.com/apache/airflow/pull/29503
46d45e09cb5607ae583929f3eba1923a64631f48
7ba27e78812b890f0c7642d78a986fe325ff61c4
"2021-04-26T17:29:42Z"
python
"2023-02-17T14:19:11Z"
closed
apache/airflow
https://github.com/apache/airflow
15,532
["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg"]
Airflow 1.10.15 : The CSRF session token is missing when i try to trigger a new dag
**Apache Airflow version**: 1.10.15 https://raw.githubusercontent.com/apache/airflow/constraints-1.10.15/constraints-3.6.txt **Kubernetes version**: Client Version: v1.16.2 Server Version: v1.14.8-docker-1 **Environment**: python 3.6.8 + celeryExecutor + rbac set to false - **OS** (e.g. from /etc/os-release): CentOS Linux 7 (Core) - **Kernel** (e.g. `uname -a`): 3.10.0-1127.19.1.el7.x86_64 **What happened**: I have upgraded from 1.10.12 to 1.10.15; when I trigger a DAG I get the exception below ![image](https://user-images.githubusercontent.com/31507537/116103616-0b5c8600-a6b0-11eb-9255-c193a705c772.png) **What you expected to happen**: triggering a DAG should not raise exceptions **How to reproduce it**: use airflow 1.10.15 and try to trigger an example DAG, e.g. example_bash_operator **Anything else we need to know**: How often does this problem occur: every time I trigger a DAG. Relevant logs: <details><summary>webserver.log</summary> [2021-04-26 15:03:06,611] {__init__.py:50} INFO - Using executor CeleryExecutor [2021-04-26 15:03:06,612] {dagbag.py:417} INFO - Filling up the DagBag from /home/airflow/dags 175.62.58.93 - - [26/Apr/2021:15:03:10 +0000] "GET /health HTTP/1.1" 200 187 "-" "kube-probe/1.14+" 175.62.58.93 - - [26/Apr/2021:15:03:11 +0000] "GET /health HTTP/1.1" 200 187 "-" "kube-probe/1.14+" 175.62.58.93 - - [26/Apr/2021:15:03:15 +0000] "GET /health HTTP/1.1" 200 187 "-" "kube-probe/1.14+" 175.62.58.93 - - [26/Apr/2021:15:03:16 +0000] "GET /health HTTP/1.1" 200 187 "-" "kube-probe/1.14+" [2021-04-26 15:03:17,401] {csrf.py:258} INFO - The CSRF session token is missing. 
10.30.180.137 - - [26/Apr/2021:15:03:17 +0000] "POST /admin/airflow/trigger?dag_id=example_bash_operator&origin=https://xxxxxx/admin/ HTTP/1.1" 400 150 "https://xxxxxxxxxxxx/admin/airflow/trigger?dag_id=example_bash_operator&origin=https://xxxxxxxxxxxx/admin/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36" 175.62.58.93 - - [26/Apr/2021:15:03:20 +0000] "GET /health HTTP/1.1" 200 187 "-" "kube-probe/1.14+" 175.62.58.93 - - [26/Apr/2021:15:03:21 +0000] "GET /health HTTP/1.1" 200 187 "-" "kube-probe/1.14+" </details>
https://github.com/apache/airflow/issues/15532
https://github.com/apache/airflow/pull/15546
5b2fe0e74013cd08d1f76f5c115f2c8f990ff9bc
dfaaf49135760cddb1a1f79399c7b08905833c21
"2021-04-26T15:09:02Z"
python
"2021-04-27T21:20:02Z"
closed
apache/airflow
https://github.com/apache/airflow
15,526
["tests/kubernetes/kube_config", "tests/kubernetes/test_refresh_config.py"]
Improve test coverage of Kubernetes config_refresh
Kubernetes refresh_config has untested methods: https://codecov.io/gh/apache/airflow/src/master/airflow/kubernetes/refresh_config.py (75% coverage). We might want to improve that.
https://github.com/apache/airflow/issues/15526
https://github.com/apache/airflow/pull/18563
73fcbb0e4e151c9965fd69ba08de59462bbbe6dc
a6be59726004001214bd4d7e284fd1748425fa98
"2021-04-26T07:33:28Z"
python
"2021-10-13T23:30:28Z"
closed
apache/airflow
https://github.com/apache/airflow
15,524
["tests/cli/commands/test_task_command.py"]
Improve test coverage of task_command
task_command has a few commands that are not covered by tests: https://codecov.io/gh/apache/airflow/src/master/airflow/cli/commands/task_command.py (77% coverage)
https://github.com/apache/airflow/issues/15524
https://github.com/apache/airflow/pull/15760
37d549bde79cd560d24748ebe7f94730115c0e88
51e54cb530995edbb6f439294888a79724365647
"2021-04-26T07:29:42Z"
python
"2021-05-14T04:34:15Z"
closed
apache/airflow
https://github.com/apache/airflow
15,523
["tests/executors/test_kubernetes_executor.py"]
Improve test coverage of Kubernetes Executor
The Kubernetes executor has surprisingly low test coverage: 64% https://codecov.io/gh/apache/airflow/src/master/airflow/executors/kubernetes_executor.py - looks like some of the "flush/end" code is not tested. We might want to improve it.
https://github.com/apache/airflow/issues/15523
https://github.com/apache/airflow/pull/15617
cf583b9290b3c2c58893f03b12d3711cc6c6a73c
dd56875066486f8c7043fbc51f272933fa634a25
"2021-04-26T07:28:03Z"
python
"2021-05-04T21:08:21Z"
closed
apache/airflow
https://github.com/apache/airflow
15,483
["airflow/providers/apache/beam/operators/beam.py", "tests/providers/apache/beam/operators/test_beam.py"]
Dataflow operator checks wrong project_id
**Apache Airflow version**: composer-1.16.1-airflow-1.10.15 **Environment**: - **Cloud provider or hardware configuration**: Google Composer **What happened**: First, a bit of context. We have a single instance of airflow within its own GCP project, which runs dataflows jobs on different GCP projects. Let's call the project which runs airflow project A, while the project where dataflow jobs are run project D. We recently upgraded from 1.10.14 to 1.10.15 (`composer-1.14.2-airflow-1.10.14` to `composer-1.16.1-airflow-1.10.15`), and noticed that jobs were running successfully from the Dataflow console, but an error was being thrown when the `wait_for_done` call was being made by airflow to check if a dataflow job had ended. The error was reporting a 403 error code on Dataflow APIs when retrieving the job state. The error was: ``` {taskinstance.py:1152} ERROR - <HttpError 403 when requesting https://dataflow.googleapis.com/v1b3/projects/<PROJECT_A>/locations/us-east1/jobs/<JOB_NAME>?alt=json returned "(9549b560fdf4d2fe): Permission 'dataflow.jobs.get' denied on project: '<PROJECT_A>". Details: "(9549b560fdf4d2fe): Permission 'dataflow.jobs.get' denied on project: '<PROJECT_A>'"> ``` **What you expected to happen**: I noticed that the 403 code was thrown when looking up the job state within project A, while I expect this lookup to happen within project D (and to consequently NOT fail, since the associated service account has the correct permissions - since it managed to launch the job). I investigated a bit, and noticed that this looks like a regression introduced when upgrading to `composer-1.16.1-airflow-1.10.15`. This version uses an image which automatically installs `apache-airflow-backport-providers-apache-beam==2021.3.13`, which backports the dataflow operator from v2. The previous version we were using was installing `apache-airflow-backport-providers-google==2020.11.23` I checked the commits and changes, and noticed that this operator was last modified in https://github.com/apache/airflow/commit/1872d8719d24f94aeb1dcba9694837070b9884ca. Relevant lines from that commit are the following: https://github.com/apache/airflow/blob/1872d8719d24f94aeb1dcba9694837070b9884ca/airflow/providers/google/cloud/operators/dataflow.py#L1147-L1162 while these are from the previous version: https://github.com/apache/airflow/blob/70bf307f3894214c523701940b89ac0b991a3a63/airflow/providers/google/cloud/operators/dataflow.py#L965-L976 https://github.com/apache/airflow/blob/70bf307f3894214c523701940b89ac0b991a3a63/airflow/providers/google/cloud/hooks/dataflow.py#L613-L644 https://github.com/apache/airflow/blob/70bf307f3894214c523701940b89ac0b991a3a63/airflow/providers/google/cloud/hooks/dataflow.py#L965-L972 In the previous version, the job was started by calling `start_python_dataflow`, which in turn would call the `_start_dataflow` method, which would then create a local `job_controller` and use it to check if the job had ended. Throughout this chain of calls, the `project_id` parameter was passed all the way from the initialization of the `DataflowCreatePythonJobOperator` to the creation of the controller which would check if the job had ended. In the latest relevant commit, this behavior was changed. The operator receives a project_id during intialization, and creates the job using the `start_python_pipeline` method, which receives the `project_id` as part of the `variables` parameter. However, the completion of the job is checked by the `dataflow_hook.wait_for_done` call. 
The DataFlowHook used here: * does not specify the project_id when it is initialized * does not specify the project_id as a parameter when making the call to check for completion (the `wait_for_done` call) As a result, it looks like it is using the default GCP project ID (the one which the composer is running inside) and not the one used to create the Dataflow job. This explains why we can see the job launching successfully while the operator fails. I think that specifying the `project_id` as a parameter in the `wait_for_done` call may solve the issue. **How to reproduce it**: - Instantiate a composer on a new GCP project. - Launch a simple Dataflow job on another project. The Dataflow job will succeed (you can see no errors get thrown from the GCP console), but an error will be thrown in airflow logs. **Note:** I am reporting a 403 because the service account I am using, which is associated with airflow, does not have the correct permissions. I suspect that, even with the correct permissions, you may get another error (maybe 404, since there will be no job running with that ID within the project) but I have no way to test this at the moment. **Anything else we need to know**: This problem occurs every time I launch a Dataflow job on a project where the composer isn't running.
https://github.com/apache/airflow/issues/15483
https://github.com/apache/airflow/pull/24020
56fd04016f1a8561f1c02e7f756bab8805c05876
4a5250774be8f48629294785801879277f42cc62
"2021-04-22T09:22:48Z"
python
"2022-05-30T12:17:42Z"
closed
apache/airflow
https://github.com/apache/airflow
15,463
["scripts/in_container/_in_container_utils.sh"]
Inconsistency between the setup.py and the constraints file
**Apache Airflow version**: 2.0.2 **What happened**: Airflow's 2.0.2's [constraints file](https://raw.githubusercontent.com/apache/airflow/constraints-2.0.2/constraints-3.8.txt) has used newer `oauthlib==3.1.0` and `request-oauthlib==1.3.0` than 2.0.1's [constraints file](https://raw.githubusercontent.com/apache/airflow/constraints-2.0.1/constraints-3.8.txt) However both 2.0.2's [setup.py](https://github.com/apache/airflow/blob/10023fdd65fa78033e7125d3d8103b63c127056e/setup.py#L282-L286) and 2.0.1's [setup.py](https://github.com/apache/airflow/blob/beb8af5ac6c438c29e2c186145115fb1334a3735/setup.py#L273) don't allow these new versions Image build with `google_auth` being an "extra" will fail if using `pip==21.0.1` **without** the `--use-deprecated=legacy-resolver` flag. Another option is to use `pip==20.2.4`. **What you expected to happen**: The package versions in `setup.py` and `constraints-3.8.txt` should be consistent with each other. <!-- What do you think went wrong? --> **How to reproduce it**: `docker build` with the following in the `Dockerfile`: ``` pip install apache-airflow[password,celery,redis,postgres,hive,jdbc,mysql,statsd,ssh,google_auth]==2.0.2 \ --constraint https://raw.githubusercontent.com/apache/airflow/constraints-2.0.2/constraints-3.8.txt ``` image build failed with ``` ERROR: Could not find a version that satisfies the requirement oauthlib!=2.0.3,!=2.0.4,!=2.0.5,<3.0.0,>=1.1.2; extra == "google_auth" (from apache-airflow[celery,google-auth,hive,jdbc,mysql,password,postgres,redis,ssh,statsd]) ERROR: No matching distribution found for oauthlib!=2.0.3,!=2.0.4,!=2.0.5,<3.0.0,>=1.1.2; extra == "google_auth" ```
https://github.com/apache/airflow/issues/15463
https://github.com/apache/airflow/pull/15470
c5e302030de7512a07120f71f388ad1859b26ca2
5da74f668e68132144590d1f95008bacf6f8b45e
"2021-04-20T21:40:34Z"
python
"2021-04-21T12:06:22Z"
closed
apache/airflow
https://github.com/apache/airflow
15,451
["airflow/providers/google/provider.yaml", "scripts/in_container/run_install_and_test_provider_packages.sh", "tests/core/test_providers_manager.py"]
No module named 'airflow.providers.google.common.hooks.leveldb'
**Apache Airflow version**: 2.0.2 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.18.18 **Environment**: Cloud provider or hardware configuration: AWS **What happened**: Updated to Airflow 2.0.2 and a new warning appeared in the webserver logs: ``` WARNING - Exception when importing 'airflow.providers.google.common.hooks.leveldb.LevelDBHook' from 'apache-airflow-providers-google' package: No module named 'airflow.providers.google.common.hooks.leveldb' ``` **What you expected to happen**: No warning. **How to reproduce it**: I don't know the specific details. I have tried `pip install --upgrade apache-airflow-providers-google` but the error was still there. **Anything else we need to know**: I am not using LevelDB for anything in my code, so I don't understand where this error is coming from.
https://github.com/apache/airflow/issues/15451
https://github.com/apache/airflow/pull/15453
63bec6f634ba67ec62a77c301e390b8354e650c9
42a1ca8aab905a0eb1ffb3da30cef9c76830abff
"2021-04-20T10:44:17Z"
python
"2021-04-20T17:36:40Z"
closed
apache/airflow
https://github.com/apache/airflow
15,439
["airflow/jobs/local_task_job.py", "tests/jobs/test_local_task_job.py"]
DAG run state not updated while DAG is paused
**Apache Airflow version**: 2.0.0 **What happened**: The state of a DAG run does not update while the DAG is paused. The _tasks_ continue to run if the DAG run was kicked off before the DAG was paused and eventually finish and are marked correctly. The DAG run state does not get updated and stays in Running state until the DAG is unpaused. Screenshot: ![Screen Shot 2021-04-19 at 2 18 56 PM](https://user-images.githubusercontent.com/10891729/115284288-7b9c6200-a11a-11eb-98ab-5ce86c457a17.png) **What you expected to happen**: I feel like the more intuitive behavior would be to let the DAG run continue if it is paused, and to mark the DAG run state as completed the same way the tasks currently behave. **How to reproduce it**: It can be repoduced using the example DAG in the docs: https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html You would kick off a DAG run, and then paused the DAG and see that even though the tasks finish, the DAG run is never marked as completed while the DAG is paused. I have been able to reproduce this issue 100% of time. It seems like logic to update the DAG run state simply does not execute while the DAG is paused. **Anything else we need to know**: Some background on my use case: As part of our deployment, we use the Airflow rest API to pause a DAG and then use the api to check the DAG run state and wait until all dag runs are finished. Because of this bug, any DAG run in progress when we paused the DAG will never be marked as completed.
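For reference, this is roughly the pause-then-poll flow our deployment uses, sketched against the stable REST API (the base URL, credentials, and dag_id below are hypothetical):

```python
import requests

BASE_URL = "http://localhost:8080/api/v1"  # hypothetical deployment URL
AUTH = ("admin", "admin")                  # hypothetical credentials
DAG_ID = "tutorial"

# Pause the DAG as part of the deployment...
requests.patch(
    f"{BASE_URL}/dags/{DAG_ID}",
    params={"update_mask": "is_paused"},
    json={"is_paused": True},
    auth=AUTH,
)

# ...then poll its runs; with this bug, runs that were in progress when the
# DAG was paused stay in the "running" state forever.
runs = requests.get(f"{BASE_URL}/dags/{DAG_ID}/dagRuns", auth=AUTH).json()["dag_runs"]
unfinished = [r for r in runs if r["state"] not in ("success", "failed")]
print(f"{len(unfinished)} run(s) still not finished")
```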
https://github.com/apache/airflow/issues/15439
https://github.com/apache/airflow/pull/16343
d53371be10451d153625df9105234aca77d5f1d4
3834df6ade22b33addd47e3ab2165a0b282926fa
"2021-04-19T18:27:33Z"
python
"2021-06-17T23:29:00Z"
closed
apache/airflow
https://github.com/apache/airflow
15,434
["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"]
KubernetesPodOperator name randomization
`KubernetesPodOperator.name` randomization should be decoupled from the way the name is set. Currently `name` is only randomized if the `name` kwarg is used. However, one could also want name randomization when a name is set in a `pod_template_file` or `full_pod_spec`. Move the name randomization feature behind a new feature flag, defaulted to True. **Related Issues** #14167
https://github.com/apache/airflow/issues/15434
https://github.com/apache/airflow/pull/19398
ca679c014cad86976c1b2e248b099d9dc9fc99eb
854b70b9048c4bbe97abde2252b3992892a4aab0
"2021-04-19T14:15:31Z"
python
"2021-11-07T16:47:01Z"
closed
apache/airflow
https://github.com/apache/airflow
15,416
["BREEZE.rst", "scripts/in_container/configure_environment.sh"]
breeze should load local tmux configuration in 'breeze start-airflow'
**Description** Currently, when we run ` breeze start-airflow ` **breeze** doesn't load local tmux configuration file **.tmux.conf** and we get default tmux configuration inside the containers. **Use case / motivation** Breeze must load local **tmux configuration** in to the containers and developers should be able to use their local configurations. **Are you willing to submit a PR?** YES <!--- We accept contributions! --> **Related Issues** None <!-- Is there currently another issue associated with this? -->
https://github.com/apache/airflow/issues/15416
https://github.com/apache/airflow/pull/15454
fdea6226742d36eea2a7e0ef7e075f7746291561
508cd394bcf8dc1bada8824d52ebff7bb6c86b3b
"2021-04-17T14:34:32Z"
python
"2021-04-21T16:46:02Z"
closed
apache/airflow
https://github.com/apache/airflow
15,399
["airflow/models/pool.py", "tests/models/test_pool.py"]
Not scheduling since there are (negative number) open slots in pool
**Apache Airflow version**: 2.0.1 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.16 **Environment**: - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): - **Kernel** (e.g. `uname -a`): - **Install tools**: - **Others**: **What happened**: Airflow fails to schedule any tasks after some time. The "Task Instance Details" tab of some of the failed tasks show the following: ``` ('Not scheduling since there are %s open slots in pool %s and require %s pool slots', -3, 'transformation', 3) ``` Admin > Pools tab shows 0 Running Slots but 9 Queued Slots. Gets stuck in this state until airflow is restarted. **What you expected to happen**: Number of "open slots in pool" should never be negative! **How to reproduce it**: - Create/configure a pool with a small size (eg. 6) - DAG with multiple tasks occupying multiple pool_slots (eg. pool_slots=3) **Anything else we need to know**:
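A minimal sketch of the kind of DAG that triggers this, assuming a pre-existing `transformation` pool of size 6 (the task count and sleep command are arbitrary):

```python
import datetime as dt

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="pool_slots_repro",
    start_date=dt.datetime(2021, 1, 1),
    schedule_interval="@once",
    catchup=False,
) as dag:
    for i in range(6):
        # Each task requests 3 slots from a pool of size 6, so only two can
        # run at once; over time the open-slot count can go negative.
        BashOperator(
            task_id=f"heavy_task_{i}",
            bash_command="sleep 60",
            pool="transformation",
            pool_slots=3,
        )
```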
https://github.com/apache/airflow/issues/15399
https://github.com/apache/airflow/pull/15426
8711f90ab820ed420ef317b931e933a2062c891f
d7c27b85055010377b6f971c3c604ce9821d6f46
"2021-04-16T05:14:41Z"
python
"2021-04-19T22:14:40Z"
closed
apache/airflow
https://github.com/apache/airflow
15,384
["airflow/www/utils.py", "airflow/www/views.py", "tests/www/test_utils.py"]
Pagination doesn't work with tags filter
**Apache Airflow version**: 2.0.1 **Environment**: - **OS**: Linux Mint 19.2 - **Kernel**: 5.5.0-050500-generic #202001262030 SMP Mon Jan 27 01:33:36 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux **What happened**: Seems that pagination doesn't work. I filter DAGs by tags and get too many results to get them at one page. When I click second page I'm redirected to the first one (actually, it doesn't matter if I click second, last or any other - I'm always getting redirected to the first one). **What you expected to happen**: I expect to be redirected to the correct page when I click number on the bottom of the page. **How to reproduce it**: 1. Create a lot of DAGs with the same tag 2. Filter by tag 3. Go to the next page in the pagination bar **Implementation example**: ``` from airflow import DAG from airflow.utils.dates import days_ago for i in range(200): name = 'test_dag_' + str(i) dag = DAG( dag_id=name, schedule_interval=None, start_date=days_ago(2), tags=['example1'], ) globals()[name] = dag ```
https://github.com/apache/airflow/issues/15384
https://github.com/apache/airflow/pull/15411
cb1344b63d6650de537320460b7b0547efd2353c
f878ec6c599a089a6d7516b7a66eed693f0c9037
"2021-04-15T14:57:20Z"
python
"2021-04-16T21:34:10Z"
closed
apache/airflow
https://github.com/apache/airflow
15,374
["airflow/models/dag.py", "tests/models/test_dag.py"]
Clearing a subdag task leaves parent dag in the failed state
**Apache Airflow version**: 2.0.1 **Kubernetes version**: Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"} **What happened**: Clearing a failed subdag task with Downstream+Recursive does not automatically set the state of the parent dag to 'running' so that the downstream parent tasks can execute. The work around is to manually set the state of the parent dag to running after clearing the subdag task **What you expected to happen**: With airflow version 1.10.4 the parent dag was automatically set to 'running' for this same scenario **How to reproduce it**: - Clear a failed subdag task selecting option for Downstream+Recursive - See that all down stream tasks in the subdag as well as the parent dag have been cleared - See that the parent dag is left in 'failed' state.
https://github.com/apache/airflow/issues/15374
https://github.com/apache/airflow/pull/15562
18531f81848dbd8d8a0d25b9f26988500a27e2a7
a4211e276fce6521f0423fe94b01241a9c43a22c
"2021-04-14T21:15:44Z"
python
"2021-04-30T19:52:26Z"
closed
apache/airflow
https://github.com/apache/airflow
15,353
["docs/apache-airflow/howto/custom-view-plugin.rst", "docs/apache-airflow/howto/index.rst", "docs/spelling_wordlist.txt", "metastore_browser/hive_metastore.py"]
Some more information regarding custom view plugins would be really nice!
**Description** Some more information regarding custom view plugins would be really nice **Use case / motivation** I have created a custom view for airflow which was a little tricky since the Airflow docs are quite short and most of the information in the www is out of date. Additionally the only example cannot simply be copied and pasted. Maybe one example view would be nice or at least some more information (especially how to implement the standard Airflow layout or where to find it) Maybe some additional documentation would be nice or even a quick start guide? **Are you willing to submit a PR?** Would be a pleasure after some discussion! **Related Issues** I haven't found any related issues Some feedback would be nice since this is my first issue:)
https://github.com/apache/airflow/issues/15353
https://github.com/apache/airflow/pull/27244
1447158e690f3d63981b3d8ec065665ec91ca54e
544c93f0a4d2673c8de64d97a7a8128387899474
"2021-04-13T19:08:56Z"
python
"2022-10-31T04:33:52Z"
closed
apache/airflow
https://github.com/apache/airflow
15,332
["airflow/providers/sftp/hooks/sftp.py", "airflow/providers/sftp/sensors/sftp.py", "tests/providers/sftp/hooks/test_sftp.py", "tests/providers/sftp/sensors/test_sftp.py"]
SftpSensor w/ possibility to use RegEx or fnmatch
**Description** SmartSftpSensor with possibility to search for patterns (RegEx or UNIX fnmatch) in filenames or folders **Use case / motivation** I would like to have the possibility to use wildcards and/or regular expressions to look for certain files when using an SftpSensor. At the moment I tried to do something like this: ```python from airflow.providers.sftp.sensors.sftp import SFTPSensor from airflow.plugins_manager import AirflowPlugin from airflow.utils.decorators import apply_defaults from typing import Any import os import fnmatch class SmartSftpSensor(SFTPSensor): poke_context_fields = ('path', 'filepattern', 'sftp_conn_id', ) # <- Required fields template_fields = ['filepattern', 'path'] @apply_defaults def __init__( self, filepattern="", **kwargs: Any): super().__init__(**kwargs) self.filepath = self.path self.filepattern = filepattern def poke(self, context): full_path = self.filepath directory = os.listdir(full_path) for file in directory: if not fnmatch.fnmatch(file, self.filepattern): pass else: context['task_instance'].xcom_push(key='file_name', value=file) return True return False def is_smart_sensor_compatible(self): # <- Required result = ( not self.soft_fail and super().is_smart_sensor_compatible() ) return result class MyPlugin(AirflowPlugin): name = "my_plugin" operators = [SmartSftpSensor] ``` And I call it by doing ```python sense_file = SmartSftpSensor( task_id='sense_file', sftp_conn_id='my_sftp_connection', path=templ_remote_filepath, filepattern=filename, timeout=3 ) ``` where path is the folder containing the files and filepattern is a rendered filename with wildcards: `filename = """{{ execution_date.strftime("%y%m%d_%H00??_P??_???") }}.LV1"""`, which is rendered to e.g. `210412_1600??_P??_???.LV1` but I am still not getting the expected result, as it's not capturing anything. **Are you willing to submit a PR?** Yes! **Related Issues** I didn't find any
https://github.com/apache/airflow/issues/15332
https://github.com/apache/airflow/pull/24084
ec84ffe71cfa8246155b9b4cb10bf2167e75adcf
e656e1de55094e8369cab80b9b1669b1d1225f54
"2021-04-12T17:01:24Z"
python
"2022-06-06T12:54:27Z"
closed
apache/airflow
https://github.com/apache/airflow
15,318
["airflow/cli/cli_parser.py", "airflow/cli/commands/role_command.py", "tests/cli/commands/test_role_command.py"]
Add CLI to delete roles
Currently there is no option to delete a role from CLI which is very limiting. I think it would be good if CLI will allow to delete a role (assuming no users are assigned to the role)
https://github.com/apache/airflow/issues/15318
https://github.com/apache/airflow/pull/25854
3c806ff32d48e5b7a40b92500969a0597106d7db
799b2695bb09495fc419d3ea2a8d29ff27fc3037
"2021-04-11T06:39:05Z"
python
"2022-08-27T00:37:12Z"
closed
apache/airflow
https://github.com/apache/airflow
15,289
["setup.py"]
Deprecation warnings seen when working in Breeze
when working with breeze I saw these warnings many times in the logs: ``` =============================== warnings summary =============================== tests/providers/amazon/aws/log/test_cloudwatch_task_handler.py::TestCloudwatchTaskHandler::test_close_prevents_duplicate_calls /usr/local/lib/python3.6/site-packages/jose/backends/cryptography_backend.py:18: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead from cryptography.utils import int_from_bytes, int_to_bytes tests/always/test_example_dags.py::TestExampleDags::test_should_be_importable /usr/local/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject return f(*args, **kwds) tests/always/test_example_dags.py::TestExampleDags::test_should_be_importable /usr/local/lib/python3.6/site-packages/scrapbook/__init__.py:8: FutureWarning: 'nteract-scrapbook' package has been renamed to `scrapbook`. No new releases are going out for this old package name. warnings.warn("'nteract-scrapbook' package has been renamed to `scrapbook`. No new releases are going out for this old package name.", FutureWarning) ```
https://github.com/apache/airflow/issues/15289
https://github.com/apache/airflow/pull/15290
594d93d3b0882132615ec26770ea77ff6aac5dff
9ba467b388148f4217b263d2518e8a24407b9d5c
"2021-04-08T19:07:53Z"
python
"2021-04-09T15:28:43Z"
closed
apache/airflow
https://github.com/apache/airflow
15,280
["airflow/providers/apache/spark/hooks/spark_submit.py", "tests/providers/apache/spark/hooks/test_spark_submit.py"]
Incorrect handling of DAG stop when using spark-submit in cluster mode on a YARN cluster
**Apache Airflow version**: v2.0.1 **Environment**: - **Cloud provider or hardware configuration**: bare metal - **OS** (e.g. from /etc/os-release): Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-65-generic x86_64) - **Kernel** (e.g. `uname -a`): Linux 5.4.0-65-generic #73-Ubuntu SMP Mon Jan 18 17:25:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux - **Install tools**: miniconda - **Others**: python 3.7, hadoop 2.9.2, spark 2.4.7 **What happened**: I have two problems: 1. When a DAG starts spark-submit on YARN with deploy_mode="cluster", Airflow doesn't track the driver state. Therefore, when the YARN job fails, the DAG's state remains "running". 2. When I manually stop a DAG's job, for example by marking it as "failed", the same job keeps running on the YARN cluster. This error occurs because an empty environment is passed to subprocess.Popen and the Hadoop bin directory doesn't exist in PATH: ERROR - [Errno 2] No such file or directory: 'yarn': 'yarn' **What you expected to happen**: In the first case, the task should move to the "failed" state. In the second case, the task should be stopped on the YARN cluster. **How to reproduce it**: To reproduce the first issue: start spark_submit on a YARN cluster with deploy="cluster" and master="yarn", then kill the task from the YARN UI. In Airflow the task state remains "running". To reproduce the second issue: start spark_submit on a YARN cluster with deploy="cluster" and master="yarn", then manually change the running job's state to "failed"; on the Hadoop cluster the same job remains in the running state. **Anything else we need to know**: I propose the following changes to `airflow/providers/apache/spark/hooks/spark_submit.py`: 1. line 201: ```python return 'spark://' in self._connection['master'] and self._connection['deploy_mode'] == 'cluster' ``` change to ```python return ('spark://' in self._connection['master'] or self._connection['master'] == "yarn") and \ (self._connection['deploy_mode'] == 'cluster') ``` 2. line 659: ```python env = None ``` change to ```python env = {**os.environ.copy(), **(self._env if self._env else {})} ``` Applying this patch solved the above issues.
https://github.com/apache/airflow/issues/15280
https://github.com/apache/airflow/pull/15304
9dd14aae40f4c2164ce1010cd5ee67d2317ea3ea
9015beb316a7614616c9d8c5108f5b54e1b47843
"2021-04-08T15:15:44Z"
python
"2021-04-09T23:04:14Z"
closed
apache/airflow
https://github.com/apache/airflow
15,279
["airflow/providers/amazon/aws/log/cloudwatch_task_handler.py", "setup.py"]
Error on logging empty line to Cloudwatch
**Apache Airflow version**: 2.0.1 **Environment**: - **Cloud provider or hardware configuration**: AWS **What happened**: I have Airflow with Cloudwatch-based remote logging running. I also have `BashOperator` that does, for example, `rsync` with invalid parameters, for example `rsync -av test test`. The output of the `rsync` error is formatted and contains empty line. Once that empty line is logged to the Cloudwatch, i receive an error: ``` 2021-04-06 19:29:22,318] /home/airflow/.local/lib/python3.6/site-packages/watchtower/__init__.py:154 WatchtowerWarning: Failed to deliver logs: Parameter validation failed: Invalid length for parameter logEvents[5].message, value: 0, valid range: 1-inf [2021-04-06 19:29:22,320] /home/airflow/.local/lib/python3.6/site-packages/watchtower/__init__.py:158 WatchtowerWarning: Failed to deliver logs: None ``` So basically empty lines can't be submitted to the Cloudwatch and as result the whole output of the process doesn't appear in logs. **What you expected to happen**: I expect to have an output of the bash command in logs. Empty lines can be skipped or replaced with something. **How to reproduce it**: For example: run `BashOperator` with `rsync` command that fails on Airflow with Cloudwatch-based remote logging. It could be any other command that produces empty line in the output.
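As a stop-gap, something along these lines is what I have in mind: a stdlib logging filter that drops empty messages before they reach the remote handler (the handler below is a stand-in; the real fix probably belongs in the CloudWatch task handler or watchtower itself):

```python
import logging


class SkipEmptyMessages(logging.Filter):
    """Drop records whose rendered message is empty, since CloudWatch rejects
    zero-length logEvents messages."""

    def filter(self, record: logging.LogRecord) -> bool:
        return bool(record.getMessage().strip())


handler = logging.StreamHandler()  # stand-in for the CloudWatch handler
handler.addFilter(SkipEmptyMessages())

logger = logging.getLogger("example")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("rsync error output")  # delivered as usual
logger.info("")                    # filtered out, never reaches the handler
```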
https://github.com/apache/airflow/issues/15279
https://github.com/apache/airflow/pull/19907
5b50d610d4f1288347392fac4a6eaaed78d1bc41
2539cb44b47d78e81a88fde51087f4cc77c924c5
"2021-04-08T15:10:23Z"
python
"2021-12-01T17:53:30Z"
closed
apache/airflow
https://github.com/apache/airflow
15,261
["airflow/www/static/js/task_instance.js", "airflow/www/templates/airflow/task_instance.html"]
Changing datetime will never show task instance logs
This is an extension of #15103 **Apache Airflow version**: 2.x.x **What happened**: Once you get to the task instance logs page, the date will successfully load at first. But if you change the time of the `execution_date` from the datetimepicker in any way, the logs will be blank. The logs seem to require an exact datetime match, which can go down to 6 decimal places beyond a second. **What you expected to happen**: Logs should interpret a date and allow matching at least to the nearest whole second, which the UI can then handle (although a better UX would allow flexibility beyond that too). Alternatively, remove the datetimepicker, because logs only exist at the exact point in time a task instance occurred. **How to reproduce it**: ![Apr-07-2021 16-56-10](https://user-images.githubusercontent.com/4600967/113933125-d00a1e00-97b9-11eb-939c-9907d8ff34ac.gif) **Anything else we need to know**: Occurs every time
https://github.com/apache/airflow/issues/15261
https://github.com/apache/airflow/pull/15284
de9567f3f5dc212cee4e83f41de75c1bbe43bfe6
56a03710a607376a01cb201ec81eb9d87d7418fe
"2021-04-07T20:52:49Z"
python
"2021-04-09T00:51:18Z"
closed
apache/airflow
https://github.com/apache/airflow
15,260
["docs/apache-airflow-providers-mysql/connections/mysql.rst"]
Documentation - MySQL Connection - example contains a typo
**What happened**: There is an extra single quote after /tmp/server-ca.pem in the example. [MySQL Connections](https://airflow.apache.org/docs/apache-airflow-providers-mysql/stable/connections/mysql.html) Example “extras” field: { "charset": "utf8", "cursor": "sscursor", "local_infile": true, "unix_socket": "/var/socket", "ssl": { "cert": "/tmp/client-cert.pem", **"ca": "/tmp/server-ca.pem'",** "key": "/tmp/client-key.pem" } }
https://github.com/apache/airflow/issues/15260
https://github.com/apache/airflow/pull/15265
c594d9cfb32bbcfe30af3f5dcb452c6053cacc95
7ab4b2707669498d7278113439a13f58bd12ea1a
"2021-04-07T20:31:20Z"
python
"2021-04-08T11:09:55Z"
closed
apache/airflow
https://github.com/apache/airflow
15,259
["chart/templates/scheduler/scheduler-deployment.yaml", "chart/tests/test_scheduler.py", "chart/values.schema.json", "chart/values.yaml"]
Scheduler livenessprobe and k8s v1.20+
Pre Kubernetes v1.20, exec livenessprobes `timeoutSeconds` wasn't functional, and defaults to 1 second. The livenessprobe for the scheduler, however, takes longer than 1 second to finish so the scheduler will have consistent livenessprobe failures when running on v1.20. > Before Kubernetes 1.20, the field timeoutSeconds was not respected for exec probes: probes continued running indefinitely, even past their configured deadline, until a result was returned. ``` ... Warning Unhealthy 23s kubelet Liveness probe failed: ``` https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes **Kubernetes version**: v1.20.2 **What happened**: Livenessprobe failures keeps restarting the scheduler due to timing out. **What you expected to happen**: Livenessprobe succeeds. **How to reproduce it**: Deploy the helm chart on v1.20+ with the default livenessprobe `timeoutSeconds` of 1 and watch the scheduler livenessprobe fail.
https://github.com/apache/airflow/issues/15259
https://github.com/apache/airflow/pull/15333
18c5b8af1020a86a82c459b8a26615ba6f1d8df6
8b56629ecd44d346e35c146779e2bb5422af1b5d
"2021-04-07T20:04:27Z"
python
"2021-04-12T22:46:59Z"
closed
apache/airflow
https://github.com/apache/airflow
15,255
["tests/jobs/test_scheduler_job.py", "tests/test_utils/asserts.py"]
[QUARANTINED] TestSchedulerJob.test_scheduler_keeps_scheduling_pool_full is flaky
For example here: https://github.com/apache/airflow/runs/2288380184?check_suite_focus=true#step:6:8759 ``` =================================== FAILURES =================================== __________ TestSchedulerJob.test_scheduler_keeps_scheduling_pool_full __________ self = <tests.jobs.test_scheduler_job.TestSchedulerJob testMethod=test_scheduler_keeps_scheduling_pool_full> def test_scheduler_keeps_scheduling_pool_full(self): """ Test task instances in a pool that isn't full keep getting scheduled even when a pool is full. """ dag_d1 = DAG(dag_id='test_scheduler_keeps_scheduling_pool_full_d1', start_date=DEFAULT_DATE) BashOperator( task_id='test_scheduler_keeps_scheduling_pool_full_t1', dag=dag_d1, owner='airflow', pool='test_scheduler_keeps_scheduling_pool_full_p1', bash_command='echo hi', ) dag_d2 = DAG(dag_id='test_scheduler_keeps_scheduling_pool_full_d2', start_date=DEFAULT_DATE) BashOperator( task_id='test_scheduler_keeps_scheduling_pool_full_t2', dag=dag_d2, owner='airflow', pool='test_scheduler_keeps_scheduling_pool_full_p2', bash_command='echo hi', ) dagbag = DagBag( dag_folder=os.path.join(settings.DAGS_FOLDER, "no_dags.py"), include_examples=False, read_dags_from_db=True, ) dagbag.bag_dag(dag=dag_d1, root_dag=dag_d1) dagbag.bag_dag(dag=dag_d2, root_dag=dag_d2) dagbag.sync_to_db() session = settings.Session() pool_p1 = Pool(pool='test_scheduler_keeps_scheduling_pool_full_p1', slots=1) pool_p2 = Pool(pool='test_scheduler_keeps_scheduling_pool_full_p2', slots=10) session.add(pool_p1) session.add(pool_p2) session.commit() dag_d1 = SerializedDAG.from_dict(SerializedDAG.to_dict(dag_d1)) scheduler = SchedulerJob(executor=self.null_exec) scheduler.processor_agent = mock.MagicMock() # Create 5 dagruns for each DAG. # To increase the chances the TIs from the "full" pool will get retrieved first, we schedule all # TIs from the first dag first. date = DEFAULT_DATE for _ in range(5): dr = dag_d1.create_dagrun( run_type=DagRunType.SCHEDULED, execution_date=date, state=State.RUNNING, ) scheduler._schedule_dag_run(dr, {}, session) date = dag_d1.following_schedule(date) date = DEFAULT_DATE for _ in range(5): dr = dag_d2.create_dagrun( run_type=DagRunType.SCHEDULED, execution_date=date, state=State.RUNNING, ) scheduler._schedule_dag_run(dr, {}, session) date = dag_d2.following_schedule(date) scheduler._executable_task_instances_to_queued(max_tis=2, session=session) task_instances_list2 = scheduler._executable_task_instances_to_queued(max_tis=2, session=session) # Make sure we get TIs from a non-full pool in the 2nd list assert len(task_instances_list2) > 0 > assert all( task_instance.pool != 'test_scheduler_keeps_scheduling_pool_full_p1' for task_instance in task_instances_list2 ) E AssertionError: assert False E + where False = all(<generator object TestSchedulerJob.test_scheduler_keeps_scheduling_pool_full.<locals>.<genexpr> at 0x7fb6ecb90c10>) ```
https://github.com/apache/airflow/issues/15255
https://github.com/apache/airflow/pull/19860
d1848bcf2460fa82cd6c1fc1e9e5f9b103d95479
9b277dbb9b77c74a9799d64e01e0b86b7c1d1542
"2021-04-07T18:12:23Z"
python
"2021-12-13T17:55:43Z"
closed
apache/airflow
https://github.com/apache/airflow
15,248
["airflow/example_dags/tutorial_taskflow_api_etl_virtualenv.py", "airflow/exceptions.py", "airflow/models/dagbag.py", "airflow/providers/papermill/example_dags/example_papermill.py", "tests/api_connexion/endpoints/test_log_endpoint.py", "tests/core/test_impersonation_tests.py", "tests/dags/test_backfill_pooled_tasks.py", "tests/dags/test_dag_with_no_tags.py", "tests/models/test_dagbag.py", "tests/www/test_views.py"]
Clear notification in UI when duplicate dag names are present
**Description** When using decorators to define dags, e.g. dag_1.py: ```python from airflow.decorators import dag, task from airflow.utils.dates import days_ago DEFAULT_ARGS = { "owner": "airflow", } @task def some_task(): pass @dag( default_args=DEFAULT_ARGS, schedule_interval=None, start_date=days_ago(2), ) def my_dag(): some_task() DAG_1 = my_dag() ``` and dag_2.py: ```python from airflow.decorators import dag, task from airflow.utils.dates import days_ago DEFAULT_ARGS = { "owner": "airflow", } @task def some_other_task(): pass @dag( default_args=DEFAULT_ARGS, schedule_interval=None, start_date=days_ago(2), ) def my_dag(): some_other_task() DAG_2 = my_dag() ``` We have two different dags which have been written in isolation, but by sheer bad luck both define `my_dag()`. This seems fine for each file in isolation, but on the airflow UI, we only end up seeing one entry for `my_dag`, where it has picked up `dag_1.py` and ignored `dag_2.py`. **Use case / motivation** We currently end up with only one DAG showing up on the UI, and no indication as to why the other one hasn't appeared. Suggestion: popup similar to 'DAG import error' to highlight what needs changing in one of the DAG files in order for both to show up ("DAG import error: duplicate dag names found - please review {duplicate files} and ensure all dag definitions are unique"?) **Are you willing to submit a PR?** No time to spare on this at present **Related Issues** I haven't found any related issues with the search function.
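A minimal sketch, assuming each file can be parsed on its own with `DagBag` (the helper name and the `dags/` path are made up for illustration), of how such a dag_id clash could be detected today:

```python
# Illustrative helper only (not part of Airflow): parse each file separately
# with DagBag and report dag_ids that are defined in more than one file.
import glob
import os
from collections import defaultdict

from airflow.models import DagBag


def find_duplicate_dag_ids(dags_folder: str) -> dict:
    seen = defaultdict(list)  # dag_id -> list of files that define it
    for path in glob.glob(os.path.join(dags_folder, "**", "*.py"), recursive=True):
        file_bag = DagBag(dag_folder=path, include_examples=False)
        for dag_id in file_bag.dags:
            seen[dag_id].append(path)
    return {dag_id: files for dag_id, files in seen.items() if len(files) > 1}


if __name__ == "__main__":
    for dag_id, files in find_duplicate_dag_ids("dags/").items():
        print(f"Duplicate dag_id {dag_id!r} defined in: {files}")
```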
https://github.com/apache/airflow/issues/15248
https://github.com/apache/airflow/pull/15302
faa4a527440fb1a8f47bf066bb89bbff380b914d
09674537cb12f46ca53054314aea4d8eec9c2e43
"2021-04-07T10:04:57Z"
python
"2021-05-06T11:59:25Z"
closed
apache/airflow
https://github.com/apache/airflow
15,245
["airflow/providers/google/cloud/operators/dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py"]
Passing Custom Image Family Name to the DataprocClusterCreateOperator()
**Description** Currently, we can only pass a custom image name to **DataprocClusterCreateOperator()**. As the custom image expires after 60 days, we either need to update the image or pass the expiration token. The functionality is already available in **gcloud** and **REST**: `gcloud dataproc clusters test_cluster ...... --image projects/<custom_image_project_id>/global/images/family/<family_name> ....... ` **Use case / motivation** The user should be able to pass either a custom image or a custom image family name, so we don't have to update the image upon expiration or use the expiration token. **Are you willing to submit a PR?** Yes **Related Issues** None
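A rough sketch of what this could look like with the current create-cluster operator, assuming (as the request implies and as gcloud already allows) that a family URI is accepted wherever an image URI goes; the project, region, cluster, and family names below are placeholders, not values from this issue:

```python
# Sketch only: image_uri normally points at a concrete custom image; pointing it
# at a family URI mirrors the gcloud behaviour requested above and is an
# assumption, not a documented operator feature.
from airflow.providers.google.cloud.operators.dataproc import DataprocCreateClusterOperator

IMAGE_FAMILY_URI = "projects/my-image-project/global/images/family/my-image-family"  # placeholder

cluster_config = {
    "master_config": {"num_instances": 1, "image_uri": IMAGE_FAMILY_URI},
    "worker_config": {"num_instances": 2, "image_uri": IMAGE_FAMILY_URI},
}

create_cluster = DataprocCreateClusterOperator(
    task_id="create_cluster",
    project_id="my-project",    # placeholder
    region="europe-west1",      # placeholder
    cluster_name="test-cluster",
    cluster_config=cluster_config,
)
```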
https://github.com/apache/airflow/issues/15245
https://github.com/apache/airflow/pull/15250
99ec208024933d790272a09a6f20b241410a7df7
6da36bad2c5c86628284d91ad6de418bae7cd029
"2021-04-07T06:17:45Z"
python
"2021-04-18T17:26:44Z"
closed
apache/airflow
https://github.com/apache/airflow
15,218
["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/executors/kubernetes_executor.py", "airflow/jobs/scheduler_job.py", "airflow/kubernetes/kube_config.py", "airflow/utils/event_scheduler.py", "tests/executors/test_kubernetes_executor.py", "tests/utils/test_event_scheduler.py"]
Task stuck in queued state with pending pod
**Apache Airflow version**: 2.0.1 **Kubernetes version**: v1.19.7 **Executor**: KubernetesExecutor **What happened**: If you have a worker that gets stuck in pending forever, say with a missing volume mount, the task will stay in the queued state forever. Nothing is applying a timeout on it actually being able to start. **What you expected to happen**: Eventually the scheduler will notice that the worker hasn't progressed past pending after a given amount of time and will mark it as a failure. **How to reproduce it**: ```python from airflow import DAG from airflow.operators.bash import BashOperator from airflow.utils.dates import days_ago from kubernetes.client import models as k8s default_args = { "owner": "airflow", } with DAG( dag_id="pending", default_args=default_args, schedule_interval=None, start_date=days_ago(2), ) as dag: BashOperator( task_id="forever_pending", bash_command="date; sleep 30; date", executor_config={ "pod_override": k8s.V1Pod( spec=k8s.V1PodSpec( containers=[ k8s.V1Container( name="base", volume_mounts=[ k8s.V1VolumeMount(mount_path="/foo/", name="vol") ], ) ], volumes=[ k8s.V1Volume( name="vol", persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource( claim_name="missing" ), ) ], ) ), }, ) ``` **Anything else we need to know**: Related to: * #15149 (This is reporting that these pending pods don't get deleted via "Mark Failed") * #14556 (This handles when these pending pods get deleted and is already fixed)
https://github.com/apache/airflow/issues/15218
https://github.com/apache/airflow/pull/15263
1e425fe6459a39d93a9ada64278c35f7cf0eab06
dd7ff4621e003421521960289a323eb1139d1d91
"2021-04-05T22:04:15Z"
python
"2021-04-20T18:24:38Z"
closed
apache/airflow
https://github.com/apache/airflow
15,179
["chart/templates/NOTES.txt"]
Kubernetes does not show logs for task instances if remote logging is not configured
Without configuring remote logging, logs from Kubernetes for task instances are not complete. Without remote logging configured, the logging for task instances are outputted as : logging_level: INFO ```log BACKEND=postgresql DB_HOST=airflow-postgresql.airflow.svc.cluster.local DB_PORT=5432 [2021-04-03 12:35:52,047] {dagbag.py:448} INFO - Filling up the DagBag from /opt/airflow/dags/k8pod.py /home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/backcompat/backwards_compat_converters.py:26 DeprecationWarning: This module is deprecated. Please use `kubernetes.client.models.V1Volume`. /home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/backcompat/backwards_compat_converters.py:27 DeprecationWarning: This module is deprecated. Please use `kubernetes.client.models.V1VolumeMount`. Running <TaskInstance: k8_pod_operator_xcom.task322 2021-04-03T12:25:49.515523+00:00 [queued]> on host k8podoperatorxcomtask322.7f2ee45d4d6443c5ad26bd8fbefb8292 ``` **Apache Airflow version**: 2.0.1 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-21T01:11:42Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"} **Environment**: - **OS** (e.g. from /etc/os-release): Ubuntu 20.04 **What happened**: The logs for task instances run are not shown without remote logging configured **What you expected to happen**: I expected to see complete logs for tasks **How to reproduce it**: Start airflow using the helm chart without configuring remote logging. Run a task and check the logs. It's necessary to set `delete_worker_pods` to False so you can view the logs after the task has ended
https://github.com/apache/airflow/issues/15179
https://github.com/apache/airflow/pull/16784
1eed6b4f37ddf2086bf06fb5c4475c68fadac0f9
8885fc1d9516b30b316487f21e37d34bdd21e40e
"2021-04-03T21:20:52Z"
python
"2021-07-06T18:37:31Z"
closed
apache/airflow
https://github.com/apache/airflow
15,178
["airflow/example_dags/tutorial.py", "airflow/models/baseoperator.py", "airflow/serialization/schema.json", "airflow/www/utils.py", "airflow/www/views.py", "docs/apache-airflow/concepts.rst", "tests/serialization/test_dag_serialization.py", "tests/www/test_utils.py"]
Task doc is not shown on Airflow 2.0 Task Instance Detail view
**Apache Airflow version**: 932f8c2e9360de6371031d4d71df00867a2776e6 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): **Environment**: locally run `airflow server` - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): mac - **Kernel** (e.g. `uname -a`): - **Install tools**: - **Others**: **What happened**: The task doc is shown on the Airflow v1 Task Instance Detail view but not shown on v2. **What you expected to happen**: The task doc is shown. **How to reproduce it**: - install the latest airflow master - `airflow server` - open `tutorial_etl_dag` in `example_dags` - run the dag (I don't know why, but the task instance detail page can't open without an error if there are no dag runs) and open the task instance detail
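For reference, a minimal way to attach a task doc is via `doc_md` on any operator (the dag and task names here are made up); this is the text that should show up on the Task Instance Details view:

```python
# Minimal reproduction sketch: the doc_md text below should be rendered on the
# Task Instance Details view.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(dag_id="doc_md_demo", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    documented = BashOperator(
        task_id="documented_task",
        bash_command="echo hello",
        doc_md="#### Task doc\nThis text should appear in the Task Instance Details view.",
    )
```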
https://github.com/apache/airflow/issues/15178
https://github.com/apache/airflow/pull/15191
7c17bf0d1e828b454a6b2c7245ded275b313c792
e86f5ca8fa5ff22c1e1f48addc012919034c672f
"2021-04-03T20:48:59Z"
python
"2021-04-05T02:46:41Z"
closed
apache/airflow
https://github.com/apache/airflow
15,145
["airflow/providers/google/cloud/example_dags/example_bigquery_to_mssql.py", "airflow/providers/google/cloud/transfers/bigquery_to_mssql.py", "airflow/providers/google/provider.yaml", "tests/providers/google/cloud/transfers/test_bigquery_to_mssql.py"]
Big Query to MS SQL operator
**Description** A new transfer operator for transferring records from BigQuery to MSSQL. **Use case / motivation** Very similar to the existing BigQuery-to-MySQL transfer, this will be an operator for transferring rows from BigQuery to MSSQL. **Are you willing to submit a PR?** Yes **Related Issues** No
https://github.com/apache/airflow/issues/15145
https://github.com/apache/airflow/pull/15422
70cfe0135373d1f0400e7d9b275ebb017429794b
7f8f75eb80790d4be3167f5e1ffccc669a281d55
"2021-04-01T20:36:55Z"
python
"2021-06-12T21:07:06Z"
closed
apache/airflow
https://github.com/apache/airflow
15,113
["setup.py"]
ImportError: cannot import name '_check_google_client_version' from 'pandas_gbq.gbq'
**What happened**: `pandas-gbq` released version [0.15.0](https://github.com/pydata/pandas-gbq/releases/tag/0.15.0), which broke `apache-airflow-backport-providers-google==2020.11.23` ``` ../lib/python3.7/site-packages/airflow/providers/google/cloud/hooks/bigquery.py:49: in <module> from pandas_gbq.gbq import ( E ImportError: cannot import name '_check_google_client_version' from 'pandas_gbq.gbq' (/usr/local/lib/python3.7/site-packages/pandas_gbq/gbq.py) ``` The fix is to pin `pandas-gbq==0.14.1`.
https://github.com/apache/airflow/issues/15113
https://github.com/apache/airflow/pull/15114
64b00896d905abcf1fbae195a29b81f393319c5f
b3b412523c8029b1ffbc600952668dc233589302
"2021-03-31T14:39:00Z"
python
"2021-04-04T17:25:22Z"
closed
apache/airflow
https://github.com/apache/airflow
15,107
["Dockerfile", "chart/values.yaml", "docs/docker-stack/build-arg-ref.rst", "docs/docker-stack/build.rst", "docs/docker-stack/docker-examples/extending/writable-directory/Dockerfile", "docs/docker-stack/entrypoint.rst", "scripts/in_container/prod/entrypoint_prod.sh"]
Make the entrypoint in Prod image fail in case the user/group is not properly set
The Airflow production image accepts two types of uid/gid settings: * the airflow user (50000) with any GID * any other user with GID = 0 (this is to accommodate the OpenShift Guidelines https://docs.openshift.com/enterprise/3.0/creating_images/guidelines.html) We should check the uid/gid in the entrypoint and fail with a clear error message if they are set wrongly.
https://github.com/apache/airflow/issues/15107
https://github.com/apache/airflow/pull/15162
1d635ef0aefe995553059ee5cf6847cf2db65b8c
ce91872eccceb8fb6277012a909ad6b529a071d2
"2021-03-31T10:30:38Z"
python
"2021-04-08T17:28:36Z"
closed
apache/airflow
https://github.com/apache/airflow
15,103
["airflow/www/static/js/task_instance.js"]
Airflow web server redirects to a non-existing log folder - v2.1.0.dev0
**Apache Airflow version**: v2.1.0.dev0 **Environment**: - **Others**: Docker + docker compose ``` docker pull apache/airflow:master-python3.8 ``` **What happened**: Once the tasks finish successfully, I click on the Logs button in the web server, then I got redirected to this URL: `http://localhost:8080/task?dag_id=testing&task_id=testing2&execution_date=2021-03-30T22%3A50%3A17.075509%2B00%3A00` Everything looks fine just for 0.5-ish seconds (the screenshots below were taken by disabling the page refreshing): ![image](https://user-images.githubusercontent.com/5461023/113067907-84ee7a80-91bd-11eb-85ba-cda86eda9125.png) ![image](https://user-images.githubusercontent.com/5461023/113067993-aea7a180-91bd-11eb-9f5f-682d111f9fa8.png) Then, it instantly gets redirected to the following URL: `http://localhost:8080/task?dag_id=testing&task_id=testing2&execution_date=2021-03-30+22%3A50%3A17%2B00%3A00# ` In which I cannot see any info: ![image](https://user-images.githubusercontent.com/5461023/113068254-3d1c2300-91be-11eb-98c8-fa578bfdfbd1.png) ![image](https://user-images.githubusercontent.com/5461023/113068278-4c02d580-91be-11eb-9785-b661d4c36463.png) The problems lies in the log format specified in the URL: ``` 2021-03-30T22%3A50%3A17.075509%2B00%3A00 2021-03-30+22%3A50%3A17%2B00%3A00# ``` This is my python code to run the DAG: ```python args = { 'owner': 'airflow', } dag = DAG( dag_id='testing', default_args=args, schedule_interval=None, start_date=datetime(2019,1,1), catchup=False, tags=['example'], ) task = PythonOperator( task_id="testing2", python_callable=test_python, depends_on_past=False, op_kwargs={'test': 'hello'}, dag=dag, ) ``` **Configuration details** Environment variables from docker-compose.yml file: ``` AIRFLOW__CORE__EXECUTOR: CeleryExecutor AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0 AIRFLOW__CORE__FERNET_KEY: '' AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true' AIRFLOW__CORE__LOAD_EXAMPLES: 'false' AIRFLOW_HOME: /opt/airflow AIRFLOW__CORE__DEFAULT_TIMEZONE: Europe/Madrid AIRFLOW__WEBSERVER__DEFAULT_UI_TIMEZONE: Europe/Madrid AIRFLOW__WEBSERVER__EXPOSE_CONFIG: 'true' ```
https://github.com/apache/airflow/issues/15103
https://github.com/apache/airflow/pull/15258
019241be0c839ba32361679ffecd178c0506d10d
523fb5c3f421129aea10045081dc5e519859c1ae
"2021-03-30T23:29:48Z"
python
"2021-04-07T20:38:30Z"
closed
apache/airflow
https://github.com/apache/airflow
15,071
["airflow/cli/cli_parser.py", "airflow/cli/commands/scheduler_command.py", "chart/templates/scheduler/scheduler-deployment.yaml", "tests/cli/commands/test_scheduler_command.py"]
Run serve_logs process as part of scheduler command
**Description** - The `airflow serve_logs` command has been removed from the CLI as of 2.0.0 - When using `CeleryExecutor`, the `airflow celery worker` command runs the `serve_logs` process in the background. - We should do the same with the `airflow scheduler` command when using `LocalExecutor` or `SequentialExecutor` (see the sketch below). **Use case / motivation** - This will allow viewing task logs in the UI when using `LocalExecutor` or `SequentialExecutor` without elasticsearch configured. **Are you willing to submit a PR?** Yes. Working with @dimberman. **Related Issues** - https://github.com/apache/airflow/issues/14222 - https://github.com/apache/airflow/issues/13331
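A rough sketch of the idea, following the pattern the celery worker command already uses (process handling is simplified and where exactly this hooks into the scheduler command is left open):

```python
# Sketch: run the log-serving HTTP endpoint as a side process next to a
# locally-executing scheduler so the webserver can fetch task logs from it.
from multiprocessing import Process

from airflow.jobs.scheduler_job import SchedulerJob
from airflow.utils.serve_logs import serve_logs

log_server = Process(target=serve_logs)
log_server.start()
try:
    SchedulerJob().run()  # simplified; the real command builds the job from CLI args
finally:
    log_server.terminate()
```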
https://github.com/apache/airflow/issues/15071
https://github.com/apache/airflow/pull/15557
053d903816464f699876109b50390636bf617eff
414bb20fad6c6a50c5a209f6d81f5ca3d679b083
"2021-03-29T17:46:46Z"
python
"2021-04-29T15:06:06Z"
closed
apache/airflow
https://github.com/apache/airflow
15,059
["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/user_schema.py", "tests/api_connexion/endpoints/test_user_endpoint.py", "tests/api_connexion/schemas/test_user_schema.py"]
Remove 'user_id', 'role_id' from User and Role in OpenAPI schema
It would be good to remove the 'id' of both the User and Role schemas from what is dumped by the REST API endpoints. The IDs of the User and Role tables are sensitive data that should be hidden from the endpoints.
https://github.com/apache/airflow/issues/15059
https://github.com/apache/airflow/pull/15117
b62ca0ad5d8550a72257ce59c8946e7f134ed70b
7087541a56faafd7aa4b9bf9f94eb6b75eed6851
"2021-03-28T15:40:00Z"
python
"2021-04-07T13:54:45Z"
closed
apache/airflow
https://github.com/apache/airflow
15,023
["airflow/www/api/experimental/endpoints.py", "airflow/www/templates/airflow/trigger.html", "airflow/www/views.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py", "tests/www/api/experimental/test_endpoints.py", "tests/www/views/test_views_trigger_dag.py"]
DAG task execution and API fails if dag_run.conf is provided with an array or string (instead of dict)
**Apache Airflow version**: 2.0.1 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): Tried both pip install and k8s image **Environment**: Dev Workstation of K8s execution - both the same - **OS** (e.g. from /etc/os-release): Ubuntu 20.04 LTS - **Others**: Python 3.6 **What happened**: We use Airflow 1.10.14 currently in production and have a couple of DAGs defined today which digest a batch call. We implemented the batch (currently) in a way that the jobs are provided as dag_run.conf as an array of dicts, e.g. "[ { "job": "1" }, { "job": "2" } ]". Trying to upgrade to Airflow 2.0.1 we see that such calls are still possible to submit but all further actions are failing: - It is not possible to query status via REST API, generates a HTTP 500 - DAG starts but all tasks fail. - Logs can not be displayed (actually there are none produced on the file system) - Error logging is a bit complex, Celery worker does not provide meaningful logs on console nor produces log files, running a scheduler as SequentialExecutor reveals at least one meaningful sack trace as below - (probably a couple of other internal logic is also failing - Note that the dag_run.conf can be seen as submitted (so is correctly received) in Browse--> DAG Runs menu As a regression using the same dag and passing a dag_run.conf = "{ "batch": [ { "job": "1" }, { "job": "2" } ] }" as well as "{}". Example (simple) DAG to reproduce: ``` from airflow import DAG from airflow.operators.bash import BashOperator from airflow.utils.dates import days_ago from datetime import timedelta dag = DAG( 'test1', description='My first DAG', default_args={ 'owner': 'jscheffl', 'email': ['***@***.de'], 'email_on_failure': True, 'email_on_retry': True, 'retries': 5, 'retry_delay': timedelta(minutes=5), }, start_date=days_ago(2) ) hello_world = BashOperator( task_id='hello_world', bash_command='echo hello world', dag=dag, ) ``` Stack trace from SequentialExecutor: ``` Traceback (most recent call last): File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/bin/airflow", line 8, in <module> sys.exit(main()) File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/__main__.py", line 40, in main args.func(args) File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command return func(*args, **kwargs) File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/cli.py", line 89, in wrapper return f(*args, **kwargs) File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 225, in task_run ti.init_run_context(raw=args.raw) File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1987, in init_run_context self._set_context(self) File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/logging_mixin.py", line 54, in _set_context set_context(self.log, context) File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/logging_mixin.py", line 174, in set_context handler.set_context(value) File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py", line 
56, in set_context local_loc = self._init_file(ti) File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py", line 245, in _init_file relative_path = self._render_filename(ti, ti.try_number) File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py", line 77, in _render_filename jinja_context = ti.get_template_context() File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper return func(*args, session=session, **kwargs) File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1606, in get_template_context self.overwrite_params_with_dag_run_conf(params=params, dag_run=dag_run) File "/home/jscheffl/Programmieren/Python/Airflow/syncignore/airflow-venv/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1743, in overwrite_params_with_dag_run_conf params.update(dag_run.conf) ValueError: dictionary update sequence element #0 has length 4; 2 is required {sequential_executor.py:66} ERROR - Failed to execute task Command '['airflow', 'tasks', 'run', 'test1', 'hello_world', '2021-03-25T22:22:36.732899+00:00', '--local', '--pool', 'default_pool', '--subdir', '/home/jscheffl/Programmieren/Python/Airflow/airflow-home/dags/test1.py']' returned non-zero exit status 1.. [2021-03-25 23:42:47,209] {scheduler_job.py:1199} INFO - Executor reports execution of test1.hello_world execution_date=2021-03-25 22:22:36.732899+00:00 exited with status failed for try_number 5 ``` **What you expected to happen**: - EITHER the submission of arrays as dag_run.conf is supported like in 1.10.14 - OR I would expect that the submission contains a validation if array values are not supported by Airflow (which it seems it was at least working in 1.10) **How to reproduce it**: See DAG code above, reproduce the error e.g. by triggering with "[ "test" ]" as dag_run.conf **Anything else we need to know**: I assume not :-)
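Until this is handled, a workaround consistent with the stack trace above is to wrap the list in a dict so that `params.update(dag_run.conf)` receives a mapping; a sketch with `TriggerDagRunOperator` follows (the triggering dag id is made up, the target dag id reuses the example above):

```python
# Workaround sketch: pass a dict ({"batch": [...]}) instead of a bare list as
# the dag_run.conf payload.
from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow.utils.dates import days_ago

with DAG(dag_id="trigger_test1", start_date=days_ago(1), schedule_interval=None) as dag:
    trigger = TriggerDagRunOperator(
        task_id="trigger_batch",
        trigger_dag_id="test1",
        conf={"batch": [{"job": "1"}, {"job": "2"}]},  # dict wrapper, not a bare list
    )
```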
https://github.com/apache/airflow/issues/15023
https://github.com/apache/airflow/pull/15057
eeb97cff9c2cef46f2eb9a603ccf7e1ccf804863
01c9818405107271ee8341c72b3d2d1e48574e08
"2021-03-25T22:50:15Z"
python
"2021-06-22T12:31:37Z"
closed
apache/airflow
https://github.com/apache/airflow
15,019
["airflow/ui/src/api/index.ts", "airflow/ui/src/components/TriggerRunModal.tsx", "airflow/ui/src/interfaces/api.ts", "airflow/ui/src/utils/memo.ts", "airflow/ui/src/views/Pipelines/Row.tsx", "airflow/ui/src/views/Pipelines/index.tsx", "airflow/ui/test/Pipelines.test.tsx"]
Establish mutation patterns via the API
https://github.com/apache/airflow/issues/15019
https://github.com/apache/airflow/pull/15068
794922649982b2a6c095f7fa6be4e5d6a6d9f496
9ca49b69113bb2a1eaa0f8cec2b5f8598efc19ea
"2021-03-25T21:24:01Z"
python
"2021-03-30T00:32:11Z"
closed
apache/airflow
https://github.com/apache/airflow
15,018
["airflow/ui/package.json", "airflow/ui/src/api/index.ts", "airflow/ui/src/components/Table.tsx", "airflow/ui/src/interfaces/react-table-config.d.ts", "airflow/ui/src/views/Pipelines/PipelinesTable.tsx", "airflow/ui/src/views/Pipelines/Row.tsx", "airflow/ui/test/Pipelines.test.tsx", "airflow/ui/yarn.lock"]
Build out custom Table components
https://github.com/apache/airflow/issues/15018
https://github.com/apache/airflow/pull/15805
65519ab83ddf4bd6fc30c435b5bfccefcb14d596
2c6b003fbe619d5d736cf97f20a94a3451e1a14a
"2021-03-25T21:22:50Z"
python
"2021-05-27T20:23:02Z"
closed
apache/airflow
https://github.com/apache/airflow
15,001
["airflow/providers/amazon/aws/sensors/s3_prefix.py", "tests/providers/amazon/aws/sensors/test_s3_prefix.py"]
S3MultipleKeysSensor operator
**Description** Currently we have an operator, S3KeySensor, which polls for a given prefix in a bucket. At times there is a need to poll for multiple prefixes in a given bucket in one go. For that, I propose an S3MultipleKeysSensor, which would poll for multiple prefixes in the given bucket in one go (a workaround sketch with the existing pieces is shown below). **Use case / motivation** To make it easier for users to poll multiple S3 prefixes in a given bucket. **Are you willing to submit a PR?** Yes, I have an implementation ready for that. **Related Issues** NA
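Until a dedicated sensor exists, one way to get this behaviour with what already ships is a `PythonSensor` plus `S3Hook.check_for_prefix`; the bucket, prefixes, dag id, and connection id below are placeholders:

```python
# Illustration only: poke until every prefix in the list exists in the bucket.
from airflow import DAG
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
from airflow.sensors.python import PythonSensor
from airflow.utils.dates import days_ago


def _all_prefixes_present(bucket: str, prefixes: list) -> bool:
    hook = S3Hook(aws_conn_id="aws_default")
    return all(
        hook.check_for_prefix(prefix=prefix, delimiter="/", bucket_name=bucket)
        for prefix in prefixes
    )


with DAG(dag_id="wait_for_s3_prefixes", start_date=days_ago(1), schedule_interval=None) as dag:
    wait_for_prefixes = PythonSensor(
        task_id="wait_for_prefixes",
        python_callable=_all_prefixes_present,
        op_kwargs={"bucket": "my-bucket", "prefixes": ["raw/2021-03-25/", "staging/2021-03-25/"]},
        poke_interval=60,
    )
```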
https://github.com/apache/airflow/issues/15001
https://github.com/apache/airflow/pull/18807
ec31b2049e7c3b9f9694913031553f2d7eb66265
176165de3b297c0ed7d2b60cf6b4c37fc7a2337f
"2021-03-25T07:24:52Z"
python
"2021-10-11T21:15:16Z"
closed
apache/airflow
https://github.com/apache/airflow
15,000
["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"]
When an ECS Task fails to start, ECS Operator raises a CloudWatch exception
<!-- Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions. Don't worry if they're not all applicable; just try to include what you can :-) If you need to include code snippets or logs, please put them in fenced code blocks. If they're super-long, please use the details tag like <details><summary>super-long log</summary> lots of stuff </details> Please delete these comment blocks before submitting the issue. --> <!-- IMPORTANT!!! PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE NEXT TO "SUBMIT NEW ISSUE" BUTTON!!! PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!! Please complete the next sections or the issue will be closed. These questions are the first thing we need to know to understand the context. --> **Apache Airflow version**: 1.10.13 **Environment**: - **Cloud provider or hardware configuration**:AWS - **OS** (e.g. from /etc/os-release): Amazon Linux 2 - **Kernel** (e.g. `uname -a`): 4.14.209-160.339.amzn2.x86_64 - **Install tools**: pip - **Others**: **What happened**: When an ECS Task exits with `stopCode: TaskFailedToStart`, the ECS Operator will exit with a ResourceNotFoundException for the GetLogEvents operation. This is because the task has failed to start, so no log is created. ``` [2021-03-14 02:32:49,792] {ecs_operator.py:147} INFO - ECS Task started: {'tasks': [{'attachments': [], 'availabilityZone': 'ap-northeast-1c', 'clusterArn': 'arn:aws:ecs:ap-northeast-1:xxxx:cluster/ecs-cluster', 'containerInstanceArn': 'arn:aws:ecs:ap-northeast-1:xxxx:container-instance/ecs-cluster/xxxx', 'containers': [{'containerArn': 'arn:aws:ecs:ap-northeast-1:xxxx:container/xxxx', 'taskArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task/ecs-cluster/xxxx', 'name': 'container_image', 'image': 'xxxx.dkr.ecr.ap-northeast-1.amazonaws.com/ecr/container_image:latest', 'lastStatus': 'PENDING', 'networkInterfaces': [], 'cpu': '128', 'memoryReservation': '128'}], 'cpu': '128', 'createdAt': datetime.datetime(2021, 3, 14, 2, 32, 49, 770000, tzinfo=tzlocal()), 'desiredStatus': 'RUNNING', 'group': 'family:task', 'lastStatus': 'PENDING', 'launchType': 'EC2', 'memory': '128', 'overrides': {'containerOverrides': [{'name': 'container_image', 'command': ['/bin/bash', '-c', 'xxxx']}], 'inferenceAcceleratorOverrides': []}, 'startedBy': 'airflow', 'tags': [], 'taskArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task/ecs-cluster/xxxx', 'taskDefinitionArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task-definition/task:1', 'version': 1}], 'failures': [], 'ResponseMetadata': {'RequestId': 'xxxx', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'xxxx', 'content-type': 'application/x-amz-json-1.1', 'content-length': '1471', 'date': 'Sun, 14 Mar 2021 02:32:48 GMT'}, 'RetryAttempts': 0}} [2021-03-14 02:34:15,022] {ecs_operator.py:168} INFO - ECS Task stopped, check status: {'tasks': [{'attachments': [], 'availabilityZone': 'ap-northeast-1c', 'clusterArn': 'arn:aws:ecs:ap-northeast-1:xxxx:cluster/ecs-cluster', 'connectivity': 'CONNECTED', 'connectivityAt': datetime.datetime(2021, 3, 14, 2, 32, 49, 770000, tzinfo=tzlocal()), 'containerInstanceArn': 'arn:aws:ecs:ap-northeast-1:xxxx:container-instance/ecs-cluster/xxxx', 'containers': [{'containerArn': 'arn:aws:ecs:ap-northeast-1:xxxx:container/xxxx', 'taskArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task/ecs-cluster/xxxx', 'name': 'container_image', 'image': 'xxxx.dkr.ecr.ap-northeast-1.amazonaws.com/ecr/container_image:latest', 'lastStatus': 'STOPPED', 'reason': 'CannotPullContainerError: failed to register layer: Error 
processing tar file(exit status 1): write /var/lib/xxxx: no space left on device', 'networkInterfaces': [], 'healthStatus': 'UNKNOWN', 'cpu': '128', 'memoryReservation': '128'}], 'cpu': '128', 'createdAt': datetime.datetime(2021, 3, 14, 2, 32, 49, 770000, tzinfo=tzlocal()), 'desiredStatus': 'STOPPED', 'executionStoppedAt': datetime.datetime(2021, 3, 14, 2, 34, 12, 810000, tzinfo=tzlocal()), 'group': 'family:task', 'healthStatus': 'UNKNOWN', 'lastStatus': 'STOPPED', 'launchType': 'EC2', 'memory': '128', 'overrides': {'containerOverrides': [{'name': 'container_image', 'command': ['/bin/bash', '-c', 'xxxx']}], 'inferenceAcceleratorOverrides': []}, 'pullStartedAt': datetime.datetime(2021, 3, 14, 2, 32, 51, 68000, tzinfo=tzlocal()), 'pullStoppedAt': datetime.datetime(2021, 3, 14, 2, 34, 13, 584000, tzinfo=tzlocal()), 'startedBy': 'airflow', 'stopCode': 'TaskFailedToStart', 'stoppedAt': datetime.datetime(2021, 3, 14, 2, 34, 12, 821000, tzinfo=tzlocal()), 'stoppedReason': 'Task failed to start', 'stoppingAt': datetime.datetime(2021, 3, 14, 2, 34, 12, 821000, tzinfo=tzlocal()), 'tags': [], 'taskArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task/ecs-cluster/xxxx', 'taskDefinitionArn': 'arn:aws:ecs:ap-northeast-1:xxxx:task-definition/task:1', 'version': 2}], 'failures': [], 'ResponseMetadata': {'RequestId': 'xxxx', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'xxxx', 'content-type': 'application/x-amz-json-1.1', 'content-length': '1988', 'date': 'Sun, 14 Mar 2021 02:34:14 GMT'}, 'RetryAttempts': 0}} [2021-03-14 02:34:15,024] {ecs_operator.py:172} INFO - ECS Task logs output: [2021-03-14 02:34:15,111] {credentials.py:1094} INFO - Found credentials in environment variables. [2021-03-14 02:34:15,416] {taskinstance.py:1150} ERROR - An error occurred (ResourceNotFoundException) when calling the GetLogEvents operation: The specified log stream does not exist. Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 984, in _run_raw_task result = task_copy.execute(context=context) File "/usr/local/lib/python3.7/site-packages/airflow/contrib/operators/ecs_operator.py", line 152, in execute self._check_success_task() File "/usr/local/lib/python3.7/site-packages/airflow/contrib/operators/ecs_operator.py", line 175, in _check_success_task for event in self.get_logs_hook().get_log_events(self.awslogs_group, stream_name): File "/usr/local/lib/python3.7/site-packages/airflow/contrib/hooks/aws_logs_hook.py", line 85, in get_log_events **token_arg) File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 676, in _make_api_call raise error_class(parsed_response, operation_name) botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the GetLogEvents operation: The specified log stream does not exist. ``` <!-- (please include exact error messages if you can) --> **What you expected to happen**: ResourceNotFoundException is misleading because it feels like a problem with CloudWatchLogs. Expect AirflowException to indicate that the task has failed. <!-- What do you think went wrong? --> **How to reproduce it**: <!--- As minimally and precisely as possible. Keep in mind we do not have access to your cluster or dags. If you are using kubernetes, please attempt to recreate the issue using minikube or kind. 
## Install minikube/kind - Minikube https://minikube.sigs.k8s.io/docs/start/ - Kind https://kind.sigs.k8s.io/docs/user/quick-start/ If this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action You can include images using the .md style of ![alt text](http://url/to/img.png) To record a screencast, mac users can use QuickTime and then create an unlisted youtube video with the resulting .mov file. ---> This can be reproduced by running an ECS Task that fails to start, for example by specifying a non-existent entry_point. **Anything else we need to know**: <!-- How often does this problem occur? Once? Every time etc? Any relevant logs to include? Put them here in side a detail tag: <details><summary>x.log</summary> lots of stuff </details> --> I suspect Issue #11663 has the same problem, i.e. it's not a CloudWatch issue, but a failure to start an ECS Task.
https://github.com/apache/airflow/issues/15000
https://github.com/apache/airflow/pull/18733
a192b4afbd497fdff508b2a06ec68cd5ca97c998
767a4f5207f8fc6c3d8072fa780d84460d41fc7a
"2021-03-25T05:55:31Z"
python
"2021-10-05T21:34:26Z"
closed
apache/airflow
https://github.com/apache/airflow
14,991
["scripts/ci/libraries/_md5sum.sh", "scripts/ci/libraries/_verify_image.sh", "scripts/docker/compile_www_assets.sh"]
Static file not being loaded in web server in docker-compose
Apache Airflow version: apache/airflow:master-python3.8 Environment: Cloud provider or hardware configuration: OS (e.g. from /etc/os-release): Mac OS 10.16.5 Kernel (e.g. uname -a): Darwin Kernel Version 19.6.0 Browser: Google Chrome Version 89.0.4389.90 What happened: I am having an issue with running `apache/airflow:master-python3.8` with docker-compose. The log of the webserver says `Please make sure to build the frontend in static/ directory and restart the server` while it is running. Because the static files are not being loaded, login and dags are not working. What you expected to happen: Static files are loaded correctly. How to reproduce it: My docker-compose is based on the official example. https://github.com/apache/airflow/blob/master/docs/apache-airflow/start/docker-compose.yaml Anything else we need to know: It used to work until 2 days ago, when the new docker image was released. The login prompt looks like this. ![Screenshot 2021-03-24 at 21 54 16](https://user-images.githubusercontent.com/28846850/112381987-90d4cb00-8ceb-11eb-9324-461c6eae7b01.png)
https://github.com/apache/airflow/issues/14991
https://github.com/apache/airflow/pull/14995
775ee51d0e58aeab5d29683dd2ff21b8c9057095
5dc634bf74bbec68bbe1c7b6944d0a9efd85181d
"2021-03-24T20:54:58Z"
python
"2021-03-25T13:04:43Z"
closed
apache/airflow
https://github.com/apache/airflow
14,989
[".github/workflows/ci.yml", "docs/exts/docs_build/fetch_inventories.py", "scripts/ci/docs/ci_docs.sh", "scripts/ci/docs/ci_docs_prepare.sh"]
Make Docs builds fallback in case external docs sources are missing
Every now and then our docs builds start to fail because of an external dependency (latest example here #14985). And while we now cache that information, it does not help when the initial retrieval fails. This information does not change often, but with the number of dependencies we have it will continue to fail regularly, simply because many of those dependencies are not very reliable - they are just web pages hosted somewhere. They are nowhere near the stability of even PyPI or Apt sources and we have no mirroring in case of problems. Maybe we could a) see if we can use some kind of mirroring scheme (do those sites have mirrors?) b) if not, simply write a simple script that dumps the cached content for those to S3, refresh it in the scheduled (nightly) master CI builds and have a fallback mechanism to download it from there in case of any problems in CI?
https://github.com/apache/airflow/issues/14989
https://github.com/apache/airflow/pull/15109
2ac4638b7e93d5144dd46f2c09fb982c374db79e
8cc8d11fb87d0ad5b3b80907874f695a77533bfa
"2021-03-24T18:15:48Z"
python
"2021-04-02T22:11:44Z"
closed
apache/airflow
https://github.com/apache/airflow
14,959
["airflow/providers/docker/operators/docker_swarm.py", "tests/providers/docker/operators/test_docker_swarm.py"]
Support all terminus task states for Docker Swarm Operator
**Apache Airflow version**: latest **What happened**: There are more terminal task states than the ones we currently check in the Docker Swarm Operator. This makes the operator run indefinitely when the service goes into one of these states. **What you expected to happen**: The operator should terminate. **How to reproduce it**: Run an Airflow task via the Docker Swarm operator and return a failed status code from it. **Anything else we need to know**: As a fix I have added the complete list of task states from the Docker reference (https://docs.docker.com/engine/swarm/how-swarm-mode-works/swarm-task-states/). We would like to send this patch back upstream to Apache Airflow.
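For reference, the membership test this boils down to, with the terminal states taken from the linked Docker documentation (how the per-task states are fetched from the Swarm API is out of scope here):

```python
# Sketch of the terminal-state check described above.
TERMINAL_TASK_STATES = {"complete", "failed", "shutdown", "rejected", "orphaned", "remove"}


def service_terminated(task_states):
    """Return True once every task of the service has reached a terminal state."""
    return bool(task_states) and all(state in TERMINAL_TASK_STATES for state in task_states)
```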
https://github.com/apache/airflow/issues/14959
https://github.com/apache/airflow/pull/14960
6b78394617c7e699dda1acf42e36161d2fc29925
ab477176998090e8fb94d6f0e6bf056bad2da441
"2021-03-23T15:44:21Z"
python
"2021-04-07T12:39:43Z"
closed
apache/airflow
https://github.com/apache/airflow
14,957
[".github/workflows/ci.yml", ".pre-commit-config.yaml", "BREEZE.rst", "STATIC_CODE_CHECKS.rst", "breeze-complete", "scripts/ci/selective_ci_checks.sh", "scripts/ci/static_checks/eslint.sh"]
Run selective CI pipeline for UI-only PRs
For PRs that only touch files in `airflow/ui/` we'd like to run a selective set of CI actions. We only need linting and UI tests. Additionally, this update should pull the test runs out of the pre-commit.
https://github.com/apache/airflow/issues/14957
https://github.com/apache/airflow/pull/15009
a2d99293c9f5bdf1777fed91f1c48230111f53ac
7417f81d36ad02c2a9d7feb9b9f881610f50ceba
"2021-03-23T14:32:41Z"
python
"2021-03-31T22:10:00Z"
closed
apache/airflow
https://github.com/apache/airflow
14,924
["airflow/utils/cli.py", "airflow/utils/log/file_processor_handler.py", "airflow/utils/log/file_task_handler.py", "airflow/utils/log/non_caching_file_handler.py"]
Scheduler Memory Leak in Airflow 2.0.1
**Apache Airflow version**: 2.0.1 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.17.4 **Environment**: Dev - **OS** (e.g. from /etc/os-release): RHEL7 **What happened**: After running fine for some time, my airflow tasks got stuck in the scheduled state with the below error in Task Instance Details: "All dependencies are met but the task instance is not running. In most cases this just means that the task will probably be scheduled soon unless: - The scheduler is down or under heavy load If this task instance does not start soon please contact your Airflow administrator for assistance." **What you expected to happen**: I restarted the scheduler and then it started working fine. When I checked my metrics I realized the scheduler has a memory leak, and over the past 4 days it has reached up to 6GB of memory utilization. In version 2.0+ we don't even have the run_duration config option to restart the scheduler periodically to avoid this issue until it is resolved. **How to reproduce it**: I saw this issue in multiple dev instances of mine, all running Airflow 2.0.1 on kubernetes with KubernetesExecutor. Below are the configs that I changed from the default config. max_active_dag_runs_per_dag=32 parallelism=64 dag_concurrency=32 sql_Alchemy_pool_size=50 sql_Alchemy_max_overflow=30 **Anything else we need to know**: The scheduler memory leak occurs consistently in all instances I have been running. The memory utilization keeps growing for the scheduler.
https://github.com/apache/airflow/issues/14924
https://github.com/apache/airflow/pull/18054
6acb9e1ac1dd7705d9bfcfd9810451dbb549af97
43f595fe1b8cd6f325d8535c03ee219edbf4a559
"2021-03-21T15:35:14Z"
python
"2021-09-09T10:50:45Z"
closed
apache/airflow
https://github.com/apache/airflow
14,888
["airflow/providers/amazon/aws/transfers/s3_to_redshift.py", "tests/providers/amazon/aws/transfers/test_s3_to_redshift.py"]
S3ToRedshiftOperator is not transaction safe for truncate
**Apache Airflow version**: 2.0.0 **Environment**: - **Cloud provider or hardware configuration**: AWS - **OS** (e.g. from /etc/os-release): Amazon Linux 2 **What happened**: The TRUNCATE operation has a fine print in Redshift that it is committing the transaction. See https://docs.aws.amazon.com/redshift/latest/dg/r_TRUNCATE.html > However, be aware that TRUNCATE commits the transaction in which it is run. and > The TRUNCATE command commits the transaction in which it is run; therefore, you can't roll back a TRUNCATE operation, and a TRUNCATE command may commit other operations when it commits itself. Currently with truncate=True, the operator would generate a statement like: ```sql BEGIN; TRUNCATE TABLE schema.table; -- this commits the transaction --- the table is now empty for any readers until the end of the copy COPY .... COMMIT; ``` **What you expected to happen**: Replacing with a DELETE operation would solve the problem, in a normal database it is not considered a fast operation but with Redshift, a 1B+ rows table is deleted in less than 5 seconds on a 2-node ra3.xlplus. (not counting vacuum or analyze) and a vacuum of the empty table taking less than 3 minutes. ```sql BEGIN; DELETE FROM schema.table; COPY .... COMMIT; ``` It should be mentioned in the documentation that a delete is done for the reason above and that vacuum and analyze operations are left to manage. **How often does this problem occur? Once? Every time etc?** Always.
https://github.com/apache/airflow/issues/14888
https://github.com/apache/airflow/pull/17117
32582b5bf1432e7c7603b959a675cf7edd76c9e6
f44d7bd9cfe00b1409db78c2a644516b0ab003e9
"2021-03-19T00:33:07Z"
python
"2021-07-21T16:33:22Z"
closed
apache/airflow
https://github.com/apache/airflow
14,880
["airflow/providers/slack/operators/slack.py", "tests/providers/slack/operators/test_slack.py"]
SlackAPIFileOperator is broken
<!-- Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions. Don't worry if they're not all applicable; just try to include what you can :-) If you need to include code snippets or logs, please put them in fenced code blocks. If they're super-long, please use the details tag like <details><summary>super-long log</summary> lots of stuff </details> Please delete these comment blocks before submitting the issue. --> <!-- IMPORTANT!!! PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE NEXT TO "SUBMIT NEW ISSUE" BUTTON!!! PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!! Please complete the next sections or the issue will be closed. These questions are the first thing we need to know to understand the context. --> **Apache Airflow version**: 2.0.1 **Environment**: Docker - **Cloud provider or hardware configuration**: Local file system - **OS** (e.g. from /etc/os-release): Arch Linux - **Kernel** (e.g. `uname -a`): 5.11.5-arch1-1 **What happened**: I tried to post a file from a long Python string to a Slack channel through the SlackAPIFileOperator. I defined the operator this way: ``` SlackAPIFileOperator( task_id="{}-notifier".format(self.task_id), channel="#alerts-metrics", token=MY_TOKEN, initial_comment=":warning: alert", filename="{{ ds }}.csv", filetype="csv", content=df.to_csv() ) ``` Task failed with the following error: ``` DEBUG - Sending a request - url: https://www.slack.com/api/files.upload, query_params: {}, body_params: {}, files: {}, json_body: {'channels': '#alerts-metrics', 'content': '<a long pandas.DataFrame.to_csv output>', 'filename': '{{ ds }}.csv', 'filetype': 'csv', 'initial_comment': ':warning: alert'}, headers: {'Content-Type': 'application/json;charset=utf-8', 'Authorization': '(redacted)', 'User-Agent': 'Python/3.6.12 slackclient/3.3.2 Linux/5.11.5-arch1-1'} DEBUG - Received the following response - status: 200, headers: {'date': 'Thu, 18 Mar 2021 13:28:44 GMT', 'server': 'Apache', 'x-xss-protection': '0', 'pragma': 'no-cache', 'cache-control': 'private, no-cache, no-store, must-revalidate', 'access-control-allow-origin': '*', 'strict-transport-security': 'max-age=31536000; includeSubDomains; preload', 'x-slack-req-id': '0ff5fd17ca7e2e8397559b6347b34820', 'x-content-type-options': 'nosniff', 'referrer-policy': 'no-referrer', 'access-control-expose-headers': 'x-slack-req-id, retry-after', 'x-slack-backend': 'r', 'x-oauth-scopes': 'incoming-webhook,files:write,chat:write', 'x-accepted-oauth-scopes': 'files:write', 'expires': 'Mon, 26 Jul 1997 05:00:00 GMT', 'vary': 'Accept-Encoding', 'access-control-allow-headers': 'slack-route, x-slack-version-ts, x-b3-traceid, x-b3-spanid, x-b3-parentspanid, x-b3-sampled, x-b3-flags', 'content-type': 'application/json; charset=utf-8', 'x-envoy-upstream-service-time': '37', 'x-backend': 'files_normal files_bedrock_normal_with_overflow files_canary_with_overflow files_bedrock_canary_with_overflow files_control_with_overflow files_bedrock_control_with_overflow', 'x-server': 'slack-www-hhvm-files-iad-xg4a', 'x-via': 'envoy-www-iad-xvw3, haproxy-edge-lhr-u1ge', 'x-slack-shared-secret-outcome': 'shared-secret', 'via': 'envoy-www-iad-xvw3', 'connection': 'close', 'transfer-encoding': 'chunked'}, body: {'ok': False, 'error': 'no_file_data'} [2021-03-18 13:28:43,601] {taskinstance.py:1455} ERROR - The request to the Slack API failed. 
The server responded with: {'ok': False, 'error': 'no_file_data'} ``` **What you expected to happen**: I expect the operator to succeed and see a new message in Slack with a snippet of a downloadable CSV file. **How to reproduce it**: Just declare a DAG this way: ``` from airflow import DAG from airflow.providers.slack.operators.slack import SlackAPIFileOperator from pendulum import datetime with DAG(dag_id="SlackFile", default_args=dict(start_date=datetime(2021, 1, 1), owner='airflow', catchup=False)) as dag: SlackAPIFileOperator( task_id="Slack", token=YOUR_TOKEN, content="test-content" ) ``` And try to run it. **Anything else we need to know**: This seems to be a known issue: https://apache-airflow.slack.com/archives/CCQ7EGB1P/p1616079965083200 I workaround it with this following re-implementation: ``` from typing import Optional, Any from airflow import AirflowException from airflow.providers.slack.hooks.slack import SlackHook from airflow.providers.slack.operators.slack import SlackAPIOperator from airflow.utils.decorators import apply_defaults class SlackAPIFileOperator(SlackAPIOperator): """ Send a file to a slack channel Examples: .. code-block:: python slack = SlackAPIFileOperator( task_id="slack_file_upload", dag=dag, slack_conn_id="slack", channel="#general", initial_comment="Hello World!", file="hello_world.csv", filename="hello_world.csv", filetype="csv", content="hello,world,csv,file", ) :param channel: channel in which to sent file on slack name (templated) :type channel: str :param initial_comment: message to send to slack. (templated) :type initial_comment: str :param file: the file (templated) :type file: str :param filename: name of the file (templated) :type filename: str :param filetype: slack filetype. (templated) - see https://api.slack.com/types/file :type filetype: str :param content: file content. (templated) :type content: str """ template_fields = ('channel', 'initial_comment', 'file', 'filename', 'filetype', 'content') ui_color = '#44BEDF' @apply_defaults def __init__( self, channel: str = '#general', initial_comment: str = 'No message has been set!', file: Optional[str] = None, filename: str = 'default_name.csv', filetype: str = 'csv', content: Optional[str] = None, **kwargs, ) -> None: if (content is None) and (file is None): raise AirflowException('At least one of "content" or "file" should be defined.') self.method = 'files.upload' self.channel = channel self.initial_comment = initial_comment self.file = file self.filename = filename self.filetype = filetype self.content = content super().__init__(method=self.method, **kwargs) def execute(self, **kwargs): slack = SlackHook(token=self.token, slack_conn_id=self.slack_conn_id) args = dict( channels=self.channel, filename=self.filename, filetype=self.filetype, initial_comment=self.initial_comment ) if self.content is not None: args['content'] = self.content elif self.file is not None: args['file'] = self.content slack.call(self.method, data=args) def construct_api_call_params(self) -> Any: pass ``` Maybe it is not the best solution as it does not leverage work from `SlackAPIOperator`. But at least, it fullfill my use case.
https://github.com/apache/airflow/issues/14880
https://github.com/apache/airflow/pull/17247
797b515a23136d1f00c6bd938960882772c1c6bd
07c8ee01512b0cc1c4602e356b7179cfb50a27f4
"2021-03-18T16:07:03Z"
python
"2021-08-01T23:08:07Z"
closed
apache/airflow
https://github.com/apache/airflow
14,864
["airflow/exceptions.py", "airflow/utils/task_group.py", "tests/utils/test_task_group.py"]
Using TaskGroup without context manager (Graph view visual bug)
**Apache Airflow version**: 2.0.0 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a **What happened**: When I do not use the context manager for the task group and instead call the add function to add the tasks, those tasks show up on the Graph view. ![Screen Shot 2021-03-17 at 2 06 17 PM](https://user-images.githubusercontent.com/5952735/111544849-5939b200-8732-11eb-80dc-89c013aeb083.png) However, when I click on the task group item on the Graph UI, it will fix the issue. When I close the task group item, the tasks will not be displayed as expected. ![Screen Shot 2021-03-17 at 2 06 21 PM](https://user-images.githubusercontent.com/5952735/111544848-58a11b80-8732-11eb-928b-3c76207a0107.png) **What you expected to happen**: I expected the tasks inside the task group to not display on the Graph view. ![Screen Shot 2021-03-17 at 3 17 34 PM](https://user-images.githubusercontent.com/5952735/111545824-eaf5ef00-8733-11eb-99c2-75b051bfefe1.png) **How to reproduce it**: Render this DAG in Airflow ```python from airflow.models import DAG from airflow.operators.bash import BashOperator from airflow.operators.dummy import DummyOperator from airflow.utils.task_group import TaskGroup from datetime import datetime with DAG(dag_id="example_task_group", start_date=datetime(2021, 1, 1), tags=["example"], catchup=False) as dag: start = BashOperator(task_id="start", bash_command='echo 1; sleep 10; echo 2;') tg = TaskGroup("section_1", tooltip="Tasks for section_1") task_1 = DummyOperator(task_id="task_1") task_2 = BashOperator(task_id="task_2", bash_command='echo 1') task_3 = DummyOperator(task_id="task_3") tg.add(task_1) tg.add(task_2) tg.add(task_3) ```
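For comparison, a minimal sketch of the same DAG written with the TaskGroup context manager, which renders the group correctly in the Graph view; the dag_id is arbitrary and the tasks mirror the reproduction above.

```python
from datetime import datetime

from airflow.models import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
from airflow.utils.task_group import TaskGroup

with DAG(dag_id="example_task_group_ctx", start_date=datetime(2021, 1, 1), catchup=False) as dag:
    start = BashOperator(task_id="start", bash_command="echo 1; sleep 10; echo 2;")

    # Using the context manager instead of tg.add(...): the tasks are
    # attached to the group at definition time.
    with TaskGroup("section_1", tooltip="Tasks for section_1") as section_1:
        task_1 = DummyOperator(task_id="task_1")
        task_2 = BashOperator(task_id="task_2", bash_command="echo 1")
        task_3 = DummyOperator(task_id="task_3")

    start >> section_1
```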
https://github.com/apache/airflow/issues/14864
https://github.com/apache/airflow/pull/23071
9caa511387f92c51ab4fc42df06e0a9ba777e115
337863fa35bba8463d62e5cf0859f2bb73cf053a
"2021-03-17T22:25:05Z"
python
"2022-06-05T13:52:02Z"
closed
apache/airflow
https://github.com/apache/airflow
14,830
["airflow/api_connexion/endpoints/role_and_permission_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/security/permissions.py", "tests/api_connexion/endpoints/test_role_and_permission_endpoint.py"]
Add Create/Update/Delete API endpoints for Roles
To be able to fully manage permissions in the UI, we will need to be able to modify the roles and the permissions they have. It probably makes sense to have one PR that adds CUD endpoints for Roles (Read is already done). Permissions are not creatable via anything but code, so we only need these endpoints for Roles, not for Permissions.
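A rough sketch of what calling such an endpoint could look like from the client side, assuming the new Create endpoint ends up at POST /api/v1/roles and accepts the same name/actions shape the existing read endpoint returns; the host, credentials, role name and permission pairs below are placeholders.

```python
import requests
from requests.auth import HTTPBasicAuth

# Placeholder payload; the exact schema of the eventual endpoint may differ.
payload = {
    "name": "dag_reader",
    "actions": [
        {"action": {"name": "can_read"}, "resource": {"name": "DAGs"}},
    ],
}
response = requests.post(
    "http://localhost:8080/api/v1/roles",
    json=payload,
    auth=HTTPBasicAuth("username", "password"),
)
print(response.status_code, response.json())
```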
https://github.com/apache/airflow/issues/14830
https://github.com/apache/airflow/pull/14840
266384a63f4693b667f308d49fcbed9a10a41fce
6706b67fecc00a22c1e1d6658616ed9dd96bbc7b
"2021-03-16T10:58:54Z"
python
"2021-04-05T09:22:43Z"
closed
apache/airflow
https://github.com/apache/airflow
14,811
["setup.cfg"]
Latest SQLAlchemy (1.4) Incompatible with latest sqlalchemy_utils
<!-- Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions. Don't worry if they're not all applicable; just try to include what you can :-) If you need to include code snippets or logs, please put them in fenced code blocks. If they're super-long, please use the details tag like <details><summary>super-long log</summary> lots of stuff </details> Please delete these comment blocks before submitting the issue. --> <!-- IMPORTANT!!! PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE NEXT TO "SUBMIT NEW ISSUE" BUTTON!!! PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!! Please complete the next sections or the issue will be closed. These questions are the first thing we need to know to understand the context. --> **Apache Airflow version**: 2.0.1 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A **Environment**: - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): Mac OS Big Sur - **Kernel** (e.g. `uname -a`): - **Install tools**: pip 20.1.1 - **Others**: **What happened**: Our CI environment broke due to the release of SQLAlchemy 1.4, which is incompatible with the latest version of sqlalchemy-utils. ([Related issue](https://github.com/kvesteri/sqlalchemy-utils/issues/505)) Partial stacktrace: ``` File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/airflow/www/utils.py", line 27, in <module> from flask_appbuilder.models.sqla.interface import SQLAInterface File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 16, in <module> from sqlalchemy_utils.types.uuid import UUIDType File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/__init__.py", line 1, in <module> from .aggregates import aggregated # noqa File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/aggregates.py", line 372, in <module> from .functions.orm import get_column_key File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/functions/__init__.py", line 1, in <module> from .database import ( # noqa File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/functions/database.py", line 11, in <module> from .orm import quote File "/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy_utils/functions/orm.py", line 14, in <module> from sqlalchemy.orm.query import _ColumnEntity ImportError: cannot import name '_ColumnEntity' from 'sqlalchemy.orm.query' (/Users/samwheating/Desktop/tmp_venv/lib/python3.7/site-packages/sqlalchemy/orm/query.py) ``` I'm not sure what the typical procedure is in the case of breaking changes to dependencies, but seeing as there's an upcoming release I thought it might be worth pinning sqlalchemy to 1.3.x? (Or pin the version of sqlalchemy-utils to a compatible version if one is released before Airflow 2.0.2) **What you expected to happen**: `airflow db init` to run successfully. <!-- What do you think went wrong? --> **How to reproduce it**: 1) Create a new virtualenv 2) `pip install apache-airflow` 3) `airflow db init` <!--- As minimally and precisely as possible. Keep in mind we do not have access to your cluster or dags. If you are using kubernetes, please attempt to recreate the issue using minikube or kind. 
## Install minikube/kind - Minikube https://minikube.sigs.k8s.io/docs/start/ - Kind https://kind.sigs.k8s.io/docs/user/quick-start/ If this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action You can include images using the .md style of ![alt text](http://url/to/img.png) To record a screencast, mac users can use QuickTime and then create an unlisted youtube video with the resulting .mov file. --->
https://github.com/apache/airflow/issues/14811
https://github.com/apache/airflow/pull/14812
251eb7d170db3f677e0c2759a10ac1e31ac786eb
c29f6fb76b9d87c50713ae94fda805b9f789a01d
"2021-03-15T19:39:29Z"
python
"2021-03-15T20:28:06Z"
closed
apache/airflow
https://github.com/apache/airflow
14,807
["airflow/ui/package.json", "airflow/ui/src/components/AppContainer/AppHeader.tsx", "airflow/ui/src/components/AppContainer/TimezoneDropdown.tsx", "airflow/ui/src/components/MultiSelect.tsx", "airflow/ui/src/providers/TimezoneProvider.tsx", "airflow/ui/src/views/Pipelines/PipelinesTable.tsx", "airflow/ui/src/views/Pipelines/index.tsx", "airflow/ui/test/TimezoneDropdown.test.tsx", "airflow/ui/test/utils.tsx", "airflow/ui/yarn.lock"]
Design/build timezone switcher modal
- Once we have the current user's preference set and available in Context, add a modal that allows the preferred display timezone to be changed. - Modal will be triggered by clicking the time/TZ in the global navigation.
https://github.com/apache/airflow/issues/14807
https://github.com/apache/airflow/pull/15674
46d62782e85ff54dd9dc96e1071d794309497983
3614910b4fd32c90858cd9731fc0421078ca94be
"2021-03-15T15:14:24Z"
python
"2021-05-07T17:49:37Z"
closed
apache/airflow
https://github.com/apache/airflow
14,755
["tests/jobs/test_backfill_job.py"]
[QUARANTINE] Backfill depends on past test is flaky
Test backfill_depends_on_past is flaky. The whole Backfill class was in Heisentests, but I believe this is the only test that is still problematic, so I am removing the class from Heisentests and moving the depends_on_past test to quarantine.
https://github.com/apache/airflow/issues/14755
https://github.com/apache/airflow/pull/19862
5ebd63a31b5bc1974fc8974f137b9fdf0a5f58aa
a804666347b50b026a8d3a1a14c0b2e27a369201
"2021-03-13T13:00:28Z"
python
"2021-11-30T12:59:42Z"
closed
apache/airflow
https://github.com/apache/airflow
14,726
[".pre-commit-config.yaml", "BREEZE.rst", "STATIC_CODE_CHECKS.rst", "airflow/ui/package.json", "airflow/ui/yarn.lock", "breeze-complete"]
Add precommit linting and testing to the new /ui
**Description** We just initialized the new UI for AIP-38 under `/airflow/ui`. To continue development, it would be best to add a pre-commit hook to run the linting and testing commands for the new project. **Use case / motivation** The new UI already has linting and testing setup with `yarn lint` and `yarn test`. We just need a pre-commit hook for them. **Are you willing to submit a PR?** Yes **Related Issues** no
https://github.com/apache/airflow/issues/14726
https://github.com/apache/airflow/pull/14836
5f774fae530577e302c153cc8726c93040ebbde0
e395fcd247b8aa14dbff2ee979c1a0a17c42adf4
"2021-03-11T16:18:27Z"
python
"2021-03-16T23:06:26Z"
closed
apache/airflow
https://github.com/apache/airflow
14,682
["airflow/providers/amazon/aws/transfers/local_to_s3.py", "airflow/providers/google/cloud/transfers/azure_fileshare_to_gcs.py", "airflow/providers/google/cloud/transfers/s3_to_gcs.py", "tests/providers/amazon/aws/transfers/test_local_to_s3.py"]
The S3ToGCSOperator fails on templated `dest_gcs` URL
<!-- Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions. Don't worry if they're not all applicable; just try to include what you can :-) If you need to include code snippets or logs, please put them in fenced code blocks. If they're super-long, please use the details tag like <details><summary>super-long log</summary> lots of stuff </details> Please delete these comment blocks before submitting the issue. --> <!-- IMPORTANT!!! PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE NEXT TO "SUBMIT NEW ISSUE" BUTTON!!! PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!! Please complete the next sections or the issue will be closed. These questions are the first thing we need to know to understand the context. --> **Apache Airflow version**: **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): **Environment**: - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): - **Kernel** (e.g. `uname -a`): - **Install tools**: Docker **What happened**: When passing a templatized `dest_gcs` argument to the `S3ToGCSOperator` operator, the DAG fails to import because the constructor attempts to test the validity of the URL before the template has been populated in `execute`. The error is: ``` Broken DAG: [/opt/airflow/dags/bad_gs_dag.py] Traceback (most recent call last): File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/gcs.py", line 1051, in gcs_object_is_directory _, blob = _parse_gcs_url(bucket) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/gcs.py", line 1063, in _parse_gcs_url raise AirflowException('Please provide a bucket name') airflow.exceptions.AirflowException: Please provide a bucket name ``` **What you expected to happen**: The DAG should successfully parse when using a templatized `dest_gcs` value. **How to reproduce it**: Instantiating a `S3ToGCSOperator` task with `dest_gcs="{{ var.gcs_url }}"` fails. <details> ```python from airflow.decorators import dag from airflow.utils.dates import days_ago from airflow.providers.google.cloud.transfers.s3_to_gcs import S3ToGCSOperator @dag( schedule_interval=None, description="Demo S3-to-GS Bug", catchup=False, start_date=days_ago(1), ) def demo_bug(): S3ToGCSOperator( task_id="transfer_task", bucket="example_bucket", prefix="fake/prefix", dest_gcs="{{ var.gcs_url }}", ) demo_dag = demo_bug() ``` </details> **Anything else we need to know**: Should be fixable by moving the code that evaluates whether the URL is a folder to `execute()`.
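Until the check is moved into `execute()`, one possible user-side workaround is sketched below: hand the parent constructor a syntactically valid placeholder so the folder check passes at parse time, then swap the templated value back in (it is still rendered because `dest_gcs` is a templated field). The class name and placeholder bucket are made up; this assumes the check lives in `__init__`, as the traceback indicates, and is not the upstream fix.

```python
from airflow.providers.google.cloud.transfers.s3_to_gcs import S3ToGCSOperator


class LazyDestS3ToGCSOperator(S3ToGCSOperator):
    """Workaround sketch: defer the real dest_gcs to template rendering time."""

    def __init__(self, *, dest_gcs, **kwargs):
        # A valid gs:// folder URL keeps the constructor-time check happy.
        super().__init__(dest_gcs="gs://placeholder-bucket/", **kwargs)
        # Re-assign the (possibly templated) value; it is rendered before execute().
        self.dest_gcs = dest_gcs
```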
https://github.com/apache/airflow/issues/14682
https://github.com/apache/airflow/pull/19048
efdfd15477f92da059fa86b4fa18b6f29cb97feb
3c08c025c5445ffc0533ac28d07ccf2e69a19ca8
"2021-03-09T14:44:14Z"
python
"2021-10-27T06:15:00Z"
closed
apache/airflow
https://github.com/apache/airflow
14,675
["airflow/utils/helpers.py", "tests/utils/test_helpers.py"]
TriggerDagRunOperator OperatorLink doesn't work when HTML base url doesn't match the Airflow base url
**Apache Airflow version**: 2.0.0 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a **What happened**: When I click on the "Triggered DAG" Operator Link in TriggerDagRunOperator, I am redirected with a relative link. ![Screen Shot 2021-03-08 at 4 28 16 PM](https://user-images.githubusercontent.com/5952735/110399833-61f00100-802b-11eb-9399-146bfcf8627c.png) The redirect uses the HTML base URL and not the Airflow base URL. This is only an issue if the URLs do not match. **What you expected to happen**: I expect the link to take me to the Triggered DAG tree view (default view) instead of the base URL of the service hosting the webserver. **How to reproduce it**: Create an Airflow deployment where the HTML base URL doesn't match the Airflow URL.
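A hedged sketch of one way to side-step the relative redirect until it is fixed: an operator extra link that builds an absolute URL from the configured `[webserver] base_url`. The class and link name are invented, and registering it (via a plugin's `operator_extra_links`) is omitted.

```python
from urllib.parse import quote

from airflow.configuration import conf
from airflow.models.baseoperator import BaseOperatorLink
from airflow.operators.trigger_dagrun import TriggerDagRunOperator


class AbsoluteTriggeredDagLink(BaseOperatorLink):
    """Points at the triggered DAG's tree view using an absolute URL."""

    name = "Triggered DAG (absolute)"
    operators = [TriggerDagRunOperator]

    def get_link(self, operator, dttm):
        base_url = conf.get("webserver", "base_url").rstrip("/")
        return f"{base_url}/tree?dag_id={quote(operator.trigger_dag_id)}"
```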
https://github.com/apache/airflow/issues/14675
https://github.com/apache/airflow/pull/14990
62aa7965a32f1f8dde83cb9c763deef5b234092b
aaa3bf6b44238241bd61178426b692df53770c22
"2021-03-09T01:03:33Z"
python
"2021-04-11T11:51:59Z"
closed
apache/airflow
https://github.com/apache/airflow
14,597
["airflow/models/taskinstance.py", "docs/apache-airflow/concepts/connections.rst", "docs/apache-airflow/macros-ref.rst", "tests/models/test_taskinstance.py"]
Provide jinja template syntax to access connections
**Description** Expose the connection into the jinja template context via `conn.value.<connectionname>.{host,port,login,password,extra_config,etc}` Today is possible to conveniently access [airflow's variables](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#variables) in jinja templates using `{{ var.value.<variable_name> }}`. There is no equivalent (to my knowledge for [connections](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#connections)), I understand that most of the time connection are used programmatically in Operators and Hooks source code, but there are use cases where the connection info has to be pass as parameters to the operators and then it becomes cumbersome to do it without jinja template syntax. I seen workarounds like using [user defined macros to provide get_login(my_conn_id)](https://stackoverflow.com/questions/65826404/use-airflow-connection-from-a-jinja-template/65873023#65873023 ), but I'm after a consistent interface for accessing both variables and connections in the same way **Workaround** The following `user_defined_macro` (from my [stackoverflow answer](https://stackoverflow.com/a/66471911/90580)) provides the suggested syntax `connection.mssql.host` where `mssql` is the connection name: ``` class ConnectionGrabber: def __getattr__(self, name): return Connection.get_connection_from_secrets(name) dag = DAG( user_defined_macros={'connection': ConnectionGrabber()}, ...) task = BashOperator(task_id='read_connection', bash_command='echo {{connection.mssql.host }}', dag=dag) ``` This macro can be added to each DAG individually or to all DAGs via an Airflow's Plugin. What I suggest is to make this macro part of the default. **Use case / motivation** For example, passing credentials to a [KubernetesPodOperator](https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html#howto-operator-kubernetespodoperator) via [env_vars](https://cloud.google.com/composer/docs/how-to/using/using-kubernetes-pod-operator) today has to be done like this: ``` connection = Connection.get_connection_from_secrets('somecredentials') k = KubernetesPodOperator( task_id='task1', env_vars={'MY_VALUE': '{{ var.value.my_value }}', 'PWD': conn.password,}, ) ``` where I would prefer to use consistent syntax for both variables and connections like this: ``` # not needed anymore: connection = Connection.get_connection_from_secrets('somecredentials') k = KubernetesPodOperator( task_id='task1', env_vars={'MY_VALUE': '{{ var.value.my_value }}', 'PWD': '{{ conn.somecredentials.password }}',}, ) ``` The same applies to `BashOperator` where I sometimes feel the need to pass connection information to the templated script. **Are you willing to submit a PR?** yes, I can write the PR. **Related Issues**
https://github.com/apache/airflow/issues/14597
https://github.com/apache/airflow/pull/16686
5034414208f85a8be61fe51d6a3091936fe402ba
d3ba80a4aa766d5eaa756f1fa097189978086dac
"2021-03-04T07:51:09Z"
python
"2021-06-29T10:50:31Z"
closed
apache/airflow
https://github.com/apache/airflow
14,592
["airflow/configuration.py", "airflow/models/connection.py", "airflow/models/variable.py", "tests/core/test_configuration.py"]
Unreachable Secrets Backend Causes Web Server Crash
**Apache Airflow version**: 1.10.12 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a **Environment**: - **Cloud provider or hardware configuration**: Amazon MWAA - **OS** (e.g. from /etc/os-release): Amazon Linux (latest) - **Kernel** (e.g. `uname -a`): n/a - **Install tools**: n/a **What happened**: If an unreachable secrets backend is specified in airflow.cfg, the web server crashes. **What you expected to happen**: An invalid secrets backend should be ignored with a warning, and the system should default back to the metadata database secrets. **How to reproduce it**: In an environment without access to AWS Secrets Manager, add the following to your airflow.cfg:
```
[secrets]
backend = airflow.contrib.secrets.aws_secrets_manager.SecretsManagerBackend
```
**or**, in an environment without access to SSM, specify:
```
[secrets]
backend = airflow.contrib.secrets.aws_systems_manager.SystemsManagerParameterStoreBackend
```
Reference: https://airflow.apache.org/docs/apache-airflow/1.10.12/howto/use-alternative-secrets-backend.html#aws-ssm-parameter-store-secrets-backend
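A minimal sketch of the behaviour the reporter expects, written against the 2.x class layout (the 1.10 contrib paths differ): a wrapper backend that swallows connectivity errors and returns None, so lookups fall through to the environment/metastore backends instead of crashing the webserver. The class and module names are hypothetical.

```python
from typing import Optional

from airflow.providers.amazon.aws.secrets.secrets_manager import SecretsManagerBackend
from airflow.secrets.base_secrets import BaseSecretsBackend
from airflow.utils.log.logging_mixin import LoggingMixin


class ForgivingSecretsManagerBackend(BaseSecretsBackend, LoggingMixin):
    """Falls back (returns None) instead of raising when AWS is unreachable."""

    def __init__(self, **kwargs):
        super().__init__()
        self._delegate = SecretsManagerBackend(**kwargs)

    def get_conn_uri(self, conn_id: str) -> Optional[str]:
        try:
            return self._delegate.get_conn_uri(conn_id)
        except Exception:
            self.log.warning("Secrets Manager unreachable; falling back.", exc_info=True)
            return None

    def get_variable(self, key: str) -> Optional[str]:
        try:
            return self._delegate.get_variable(key)
        except Exception:
            self.log.warning("Secrets Manager unreachable; falling back.", exc_info=True)
            return None
```

Such a class would then be referenced from `[secrets] backend` in airflow.cfg in place of the provider backend.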
https://github.com/apache/airflow/issues/14592
https://github.com/apache/airflow/pull/16404
4d4830599578ae93bb904a255fb16b81bd471ef1
0abbd2d918ad9027948fd8a33ebb42487e4aa000
"2021-03-03T23:17:03Z"
python
"2021-08-27T20:59:15Z"
closed
apache/airflow
https://github.com/apache/airflow
14,563
["airflow/example_dags/example_external_task_marker_dag.py", "airflow/models/dag.py", "airflow/sensors/external_task.py", "docs/apache-airflow/howto/operator/external_task_sensor.rst", "tests/sensors/test_external_task_sensor.py"]
TaskGroup Sensor
**Description** Enable the ability for a task in a DAG to wait upon the successful completion of an entire TaskGroup. **Use case / motivation** TaskGroups provide a great mechanism for authoring DAGs; however, there are situations where it might be necessary for a task in an external DAG to wait upon the completion of the TaskGroup as a whole. At the moment this is only possible with one of the following workarounds (see the sketch below): 1. Add an external task sensor for each task in the group. 2. Add a Dummy task after the TaskGroup which the external task sensor waits on. I would envisage either adapting `ExternalTaskSensor` to also work with TaskGroups or creating a new `ExternalTaskGroupSensor`. **Are you willing to submit a PR?** Time permitting, yes! **Related Issues**
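A minimal sketch of workaround 2 above, using only existing operators; the DAG ids, schedules and task names are illustrative, and the sensor assumes both DAGs share the same execution_date.

```python
from datetime import datetime

from airflow.models import DAG
from airflow.operators.dummy import DummyOperator
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.utils.task_group import TaskGroup

with DAG("upstream_dag", start_date=datetime(2021, 1, 1), schedule_interval="@daily", catchup=False) as upstream:
    with TaskGroup("section_1") as section_1:
        DummyOperator(task_id="task_1")
        DummyOperator(task_id="task_2")
    # A join task that completes only when the whole group has succeeded.
    section_1_done = DummyOperator(task_id="section_1_done")
    section_1 >> section_1_done

with DAG("downstream_dag", start_date=datetime(2021, 1, 1), schedule_interval="@daily", catchup=False) as downstream:
    ExternalTaskSensor(
        task_id="wait_for_section_1",
        external_dag_id="upstream_dag",
        external_task_id="section_1_done",
    )
```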
https://github.com/apache/airflow/issues/14563
https://github.com/apache/airflow/pull/24902
0eb0b543a9751f3d458beb2f03d4c6ff22fcd1c7
bc04c5ff0fa56e80d3d5def38b798170f6575ee8
"2021-03-02T14:22:22Z"
python
"2022-08-22T18:13:09Z"
closed
apache/airflow
https://github.com/apache/airflow
14,518
["airflow/cli/commands/cheat_sheet_command.py", "airflow/cli/commands/info_command.py", "airflow/cli/simple_table.py"]
Airflow info command doesn't work properly with pbcopy on Mac OS
Hello, Mac OS has a command for copying data to the clipboard - `pbcopy`. Unfortunately, with the [introduction of more fancy tables](https://github.com/apache/airflow/pull/12689) to this command, we can no longer use it together. For example: ```bash airflow info | pbcopy ``` <details> <summary>Clipboard content</summary> ``` Apache Airflow: 2.1.0.dev0 System info | Mac OS | x86_64 | uname_result(system='Darwin', node='Kamils-MacBook-Pro.local', | release='20.3.0', version='Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06 | PST 2021; root:xnu-7195.81.3~1/RELEASE_X86_64', machine='x86_64', | processor='i386') | (None, 'UTF-8') | 3.8.7 (default, Feb 14 2021, 09:58:39) [Clang 12.0.0 (clang-1200.0.32.29)] | /Users/kamilbregula/.pyenv/versions/3.8.7/envs/airflow-py38/bin/python3.8 Tools info | git version 2.24.3 (Apple Git-128) | OpenSSH_8.1p1, LibreSSL 2.7.3 | Client Version: v1.19.3 | Google Cloud SDK 326.0.0 | NOT AVAILABLE | NOT AVAILABLE | 3.32.3 2020-06-18 14:16:19 | 02c344aceaea0d177dd42e62c8541e3cab4a26c757ba33b3a31a43ccc7d4aapl | psql (PostgreSQL) 13.2 Paths info | /Users/kamilbregula/airflow | /Users/kamilbregula/.pyenv/versions/airflow-py38/bin:/Users/kamilbregula/.pye | v/libexec:/Users/kamilbregula/.pyenv/plugins/python-build/bin:/Users/kamilbre | ula/.pyenv/plugins/pyenv-virtualenv/bin:/Users/kamilbregula/.pyenv/plugins/py | nv-update/bin:/Users/kamilbregula/.pyenv/plugins/pyenv-installer/bin:/Users/k | milbregula/.pyenv/plugins/pyenv-doctor/bin:/Users/kamilbregula/.pyenv/plugins | python-build/bin:/Users/kamilbregula/.pyenv/plugins/pyenv-virtualenv/bin:/Use | s/kamilbregula/.pyenv/plugins/pyenv-update/bin:/Users/kamilbregula/.pyenv/plu | ins/pyenv-installer/bin:/Users/kamilbregula/.pyenv/plugins/pyenv-doctor/bin:/ | sers/kamilbregula/.pyenv/plugins/pyenv-virtualenv/shims:/Users/kamilbregula/. | yenv/shims:/Users/kamilbregula/.pyenv/bin:/usr/local/opt/gnu-getopt/bin:/usr/ | ocal/opt/[email protected]/bin:/usr/local/opt/[email protected]/bin:/usr/local/opt/ope | ssl/bin:/Users/kamilbregula/Library/Python/2.7/bin/:/Users/kamilbregula/bin:/ | sers/kamilbregula/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin | /sbin:/Users/kamilbregula/.cargo/bin | /Users/kamilbregula/.pyenv/versions/3.8.7/envs/airflow-py38/bin:/Users/kamilb | egula/.pyenv/versions/3.8.7/lib/python38.zip:/Users/kamilbregula/.pyenv/versi | ns/3.8.7/lib/python3.8:/Users/kamilbregula/.pyenv/versions/3.8.7/lib/python3. | /lib-dynload:/Users/kamilbregula/.pyenv/versions/3.8.7/envs/airflow-py38/lib/ | ython3.8/site-packages:/Users/kamilbregula/devel/airflow/airflow:/Users/kamil | regula/airflow/dags:/Users/kamilbregula/airflow/config:/Users/kamilbregula/ai | flow/plugins | True Config info | SequentialExecutor | airflow.utils.log.file_task_handler.FileTaskHandler | sqlite:////Users/kamilbregula/airflow/airflow.db | /Users/kamilbregula/airflow/dags | /Users/kamilbregula/airflow/plugins | /Users/kamilbregula/airflow/logs Providers info | 1.2.0 | 1.0.1 | 1.0.1 | 1.1.0 | 1.0.1 | 1.0.2 | 1.0.1 | 1.0.1 | 1.0.1 | 1.0.1 | 1.0.2 | 1.0.1 | 1.0.1 | 1.0.1 | 1.0.2 | 1.0.1 | 1.0.1 | 1.0.2 | 1.0.1 | 1.0.2 | 1.0.2 | 1.1.1 | 1.0.1 | 1.0.1 | 2.1.0 | 1.0.1 | 1.0.1 | 1.1.1 | 1.0.1 | 1.0.1 | 1.1.0 | 1.0.1 | 1.2.0 | 1.0.1 | 1.0.1 | 1.0.1 | 1.0.2 | 1.0.1 | 1.0.1 | 1.1.1 | 1.0.1 | 1.0.1 | 1.0.1 | 1.0.2 | 1.0.1 | 1.0.1 | 1.0.2 | 1.0.2 | 1.0.1 | 2.0.0 | 1.0.1 | 1.0.1 | 1.0.2 | 1.1.1 | 1.0.1 | 3.0.0 | 1.0.2 | 1.2.0 | 1.0.0 | 1.0.2 | 1.0.1 | 1.0.1 | 1.0.1 ``` </details> CC: @turbaszek
https://github.com/apache/airflow/issues/14518
https://github.com/apache/airflow/pull/14528
1b0851c9b75f0d0a15427898ae49a2f67d076f81
a1097f6f29796bd11f8ed7b3651dfeb3e40eec09
"2021-02-27T21:07:49Z"
python
"2021-02-28T15:42:33Z"
closed
apache/airflow
https://github.com/apache/airflow
14,517
["airflow/cli/cli_parser.py", "airflow/cli/simple_table.py", "docs/apache-airflow/usage-cli.rst", "docs/spelling_wordlist.txt"]
The tables are not parsable by standard linux utilities.
Hello, I changed the format of the tables a long time ago so that they could be parsed in standard Linux tools such as AWK. https://github.com/apache/airflow/pull/8409 For example, to list the files that contain the DAG, I could run the command below. ``` airflow dags list | grep -v "dag_id" | awk '{print $2}' | sort | uniq ``` To pause all dags: ```bash airflow dags list | awk '{print $1}' | grep -v "dag_id"| xargs airflow dags pause ``` Unfortunately [that has changed](https://github.com/apache/airflow/pull/12704) and we now have more fancy tables, but harder to use in standard Linux tools. Alternatively, we can use JSON output, but I don't always have JQ installed on production environment, so performing administrative tasks is difficult for me. ```bash $ docker run apache/airflow:2.0.1 bash -c "jq" /bin/bash: jq: command not found ``` Best regards, Kamil Breguła CC: @turbaszek
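Where jq is unavailable, the JSON output can be post-processed with the Python interpreter that ships with Airflow; a sketch below, assuming `--output json` is supported and that the JSON keys mirror the table columns (`dag_id`, `filepath`).

```python
import json
import subprocess

raw = subprocess.run(
    ["airflow", "dags", "list", "--output", "json"],
    check=True, capture_output=True, text=True,
).stdout
dags = json.loads(raw)

# Equivalent of the awk pipeline: unique files that contain a DAG.
print(sorted({dag["filepath"] for dag in dags}))

# Pause all DAGs.
for dag in dags:
    subprocess.run(["airflow", "dags", "pause", dag["dag_id"]], check=True)
```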
https://github.com/apache/airflow/issues/14517
https://github.com/apache/airflow/pull/14546
8801a0cc3b39cf3d2a3e5ef6af004d763bdb0b93
0ef084c3b70089b9b061090f7d88ce86e3651ed4
"2021-02-27T20:56:59Z"
python
"2021-03-02T19:12:53Z"
closed
apache/airflow
https://github.com/apache/airflow
14,489
["airflow/providers/ssh/CHANGELOG.rst", "airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/provider.yaml"]
Add a retry with wait interval for SSH operator
**Description** Currently the SSH operator fails without retrying if authentication fails. We can add two more parameters, as described below, which will retry in case of an authentication failure after waiting a configured time: - max_retry - maximum number of times the SSH operator should retry in case of an exception - wait - how many seconds it should wait before the next retry **Use case / motivation** We use the SSH operator heavily in our production jobs, and what I have noticed is that sometimes it fails to authenticate; however, upon re-running the job it succeeds. This happens often. We have ended up writing our own custom operator for this (a sketch of the idea follows below). However, if we can implement this upstream, it could help others as well. Implement the suggested feature for the SSH operator. **Are you willing to submit a PR?** I will submit the PR if the feature gets approval. **Related Issues** N/A
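A rough sketch of the idea (not the custom operator mentioned above, whose code is not shown): subclass the existing `SSHOperator` and retry the whole command, waiting `wait` seconds between attempts. The parameter names follow the proposal, and the broad `except Exception` is only for illustration.

```python
import time

from airflow.exceptions import AirflowException
from airflow.providers.ssh.operators.ssh import SSHOperator


class RetryingSSHOperator(SSHOperator):
    """Retries the SSH command on failure (e.g. transient auth errors)."""

    def __init__(self, *, max_retry: int = 3, wait: int = 10, **kwargs):
        super().__init__(**kwargs)
        self.max_retry = max_retry
        self.wait = wait

    def execute(self, context):
        last_error = None
        for attempt in range(1, self.max_retry + 1):
            try:
                return super().execute(context)
            except Exception as err:
                last_error = err
                self.log.warning("SSH attempt %s/%s failed: %s", attempt, self.max_retry, err)
                time.sleep(self.wait)
        raise AirflowException(f"SSH command failed after {self.max_retry} attempts") from last_error
```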
https://github.com/apache/airflow/issues/14489
https://github.com/apache/airflow/pull/19981
4a73d8f3d1f0c2cb52707901f9e9a34198573d5e
b6edc3bfa1ed46bed2ae23bb2baeefde3f9a59d3
"2021-02-26T21:22:34Z"
python
"2022-02-01T09:30:09Z"
closed
apache/airflow
https://github.com/apache/airflow
14,486
["airflow/www/static/js/tree.js"]
tree view task instances have too much left padding in webserver UI
**Apache Airflow version**: 2.0.1 Here is tree view of a dag with one task: ![image](https://user-images.githubusercontent.com/15932138/109343250-e370b380-7821-11eb-9cff-5e1a5ef5fd44.png) For some reason the task instances render partially off the page, and there's a large amount of empty space that could have been used instead. **Environment** MacOS Chrome
https://github.com/apache/airflow/issues/14486
https://github.com/apache/airflow/pull/14566
8ef862eee6443cc2f34f4cc46425357861e8b96c
3f7ebfdfe2a1fa90b0854028a5db057adacd46c1
"2021-02-26T19:02:06Z"
python
"2021-03-04T00:00:12Z"
closed
apache/airflow
https://github.com/apache/airflow
14,481
["airflow/api_connexion/schemas/dag_schema.py", "tests/api_connexion/endpoints/test_dag_endpoint.py", "tests/api_connexion/schemas/test_dag_schema.py"]
DAG /details endpoint returning empty array objects
When testing the following two endpoints, I get different results for the array of owners and tags. The former should be identical to the response of the latter endpoint. `/api/v1/dags/{dag_id}/details`: ```json { "owners": [], "tags": [ {}, {} ], } ``` `/api/v1/dags/{dag_id}`: ```json { "owners": [ "airflow" ], "tags": [ { "name": "example" }, { "name": "example2" } ] } ```
https://github.com/apache/airflow/issues/14481
https://github.com/apache/airflow/pull/14490
9c773bbf0174a8153720d594041f886b2323d52f
4424d10f05fa268b54c81ef8b96a0745643690b6
"2021-02-26T14:59:56Z"
python
"2021-03-03T14:39:02Z"
closed
apache/airflow
https://github.com/apache/airflow
14,473
["airflow/www/static/js/tree.js"]
DagRun duration not visible in tree view tooltip if not currently running
On airflow 2.0.1 On tree view if dag run is running, duration shows as expected: ![image](https://user-images.githubusercontent.com/15932138/109248646-086e1380-779b-11eb-9d00-8cb785d88299.png) But if dag run is complete, duration is null: ![image](https://user-images.githubusercontent.com/15932138/109248752-3ce1cf80-779b-11eb-8784-9a4aaed2209b.png)
https://github.com/apache/airflow/issues/14473
https://github.com/apache/airflow/pull/14566
8ef862eee6443cc2f34f4cc46425357861e8b96c
3f7ebfdfe2a1fa90b0854028a5db057adacd46c1
"2021-02-26T02:58:19Z"
python
"2021-03-04T00:00:12Z"
closed
apache/airflow
https://github.com/apache/airflow
14,469
["setup.cfg"]
Upgrade Flask-AppBuilder to 3.2.0 for improved OAUTH/LDAP
Version `3.2.0` of Flask-AppBuilder added support for LDAP group binding (see PR: https://github.com/dpgaspar/Flask-AppBuilder/pull/1374), we should update mainly for the `AUTH_ROLES_MAPPING` feature, which lets users bind to RBAC roles based on their LDAP/OAUTH group membership. Here are the docs about Flask-AppBuilder security: https://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-ldap This will resolve https://github.com/apache/airflow/issues/8179
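For reference, a sketch of what the group binding looks like in `webserver_config.py` once Flask-AppBuilder >= 3.2.0 is in place; the server address and group DNs below are placeholders.

```python
# webserver_config.py (illustrative values only)
from flask_appbuilder.security.manager import AUTH_LDAP

AUTH_TYPE = AUTH_LDAP
AUTH_LDAP_SERVER = "ldap://ldap.example.com"
AUTH_LDAP_SEARCH = "ou=users,dc=example,dc=com"
AUTH_LDAP_GROUP_FIELD = "memberOf"

# Added in Flask-AppBuilder 3.2.0: map LDAP/OAUTH groups to FAB roles.
AUTH_ROLES_MAPPING = {
    "cn=airflow-admins,ou=groups,dc=example,dc=com": ["Admin"],
    "cn=airflow-users,ou=groups,dc=example,dc=com": ["User"],
}
AUTH_ROLES_SYNC_AT_LOGIN = True
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Viewer"
```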
https://github.com/apache/airflow/issues/14469
https://github.com/apache/airflow/pull/14665
b718495e4caecb753742c3eb22919411a715f24a
97b5e4cd6c001ec1a1597606f4e9f1c0fbea20d2
"2021-02-25T23:00:08Z"
python
"2021-03-08T17:12:05Z"
closed
apache/airflow
https://github.com/apache/airflow
14,422
["airflow/jobs/local_task_job.py", "tests/jobs/test_local_task_job.py"]
on_failure_callback does not seem to fire on pod deletion/eviction
**Apache Airflow version**: 2.0.1 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.16.x **Environment**: KubernetesExecutor with single scheduler pod **What happened**: On all previous versions we used (from 1.10.x to 2.0.0), evicting or deleting a running task pod triggered the `on_failure_callback` from `BaseOperator`. We use this functionality quite a lot to detect eviction and provide work carry-over and automatic task clear. We've recently updated our dev environment to 2.0.1 and it seems that now `on_failure_callback` is only fired when pod completes naturally, i.e. not evicted / deleted with kubectl Everything looks the same on task log level when pod is removed with `kubectl delete pod...`: ``` Received SIGTERM. Terminating subprocesses Sending Signals.SIGTERM to GPID 16 Received SIGTERM. Terminating subprocesses. Task received SIGTERM signal Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1112, in _run_raw_task self._prepare_and_execute_task_with_callbacks(context, task) File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1285, in _prepare_and_execute_task_with_callbacks result = self._execute_task(context, task_copy) File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1315, in _execute_task result = task_copy.execute(context=context) File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/li_operator.py", line 357, in execute self.operator_task_code(context) File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/mapreduce/yarn_jar_operator.py", line 62, in operator_task_code ssh_connection=_ssh_con File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/mapreduce/li_mapreduce_cluster_operator.py", line 469, in watch_application existing_apps=_associated_applications.keys() File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/mapreduce/li_mapreduce_cluster_operator.py", line 376, in get_associated_application_info logger=self.log File "/usr/local/lib/python3.7/site-packages/li_airflow_common/custom_operators/mapreduce/yarn_api/yarn_api_ssh_client.py", line 26, in send_request _response = requests.get(request) File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 76, in get return request('get', url, params=params, **kwargs) File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 677, in urlopen chunked=chunked, File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 392, in _make_request conn.request(method, url, **httplib_request_kw) File "/usr/local/lib/python3.7/http/client.py", line 1277, in request self._send_request(method, url, body, headers, encode_chunked) File "/usr/local/lib/python3.7/http/client.py", line 1323, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/usr/local/lib/python3.7/http/client.py", line 1272, in endheaders 
self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/local/lib/python3.7/http/client.py", line 1032, in _send_output self.send(msg) File "/usr/local/lib/python3.7/http/client.py", line 972, in send self.connect() File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 187, in connect conn = self._new_conn() File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", line 74, in create_connection sock.connect(sa) File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1241, in signal_handler raise AirflowException("Task received SIGTERM signal") airflow.exceptions.AirflowException: Task received SIGTERM signal Marking task as FAILED. dag_id=mock_dag_limr, task_id=SetupMockScaldingDWHJob, execution_date=20190910T000000, start_date=20210224T162811, end_date=20210224T163044 Process psutil.Process(pid=16, status='terminated', exitcode=1, started='16:28:10') (16) terminated with exit code 1 ``` But `on_failure_callback` is not triggered. For simplicity, let's assume the callback does this: ``` def act_on_failure(context): send_slack_message( message=f"{context['task_instance_key_str']} fired failure callback", channel=get_stored_variable('slack_log_channel') ) def get_stored_variable(variable_name, deserialize=False): try: return Variable.get(variable_name, deserialize_json=deserialize) except KeyError: if os.getenv('PYTEST_CURRENT_TEST'): _root_dir = str(Path(__file__).parent) _vars_path = os.path.join(_root_dir, "vars.json") _vars_json = json.loads(open(_vars_path, 'r').read()) if deserialize: return _vars_json.get(variable_name, {}) else: return _vars_json.get(variable_name, "") else: raise def send_slack_message(message, channel): _web_hook_url = get_stored_variable('slack_web_hook') post = { "text": message, "channel": channel } try: json_data = json.dumps(post) req = request.Request( _web_hook_url, data=json_data.encode('ascii'), headers={'Content-Type': 'application/json'} ) request.urlopen(req) except request.HTTPError as em: print('Failed to send slack messsage to the hook {hook}: {msg}, request: {req}'.format( hook=_web_hook_url, msg=str(em), req=str(post) )) ``` Scheduler logs related to this event: ``` 21-02-24 16:33:04,968] {kubernetes_executor.py:147} INFO - Event: mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3 had an event of type MODIFIED [2021-02-24 16:33:04,968] {kubernetes_executor.py:202} INFO - Event: mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3 Pending [2021-02-24 16:33:04,979] {kubernetes_executor.py:147} INFO - Event: mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3 had an event of type DELETED [2021-02-24 16:33:04,979] {kubernetes_executor.py:197} INFO - Event: Failed to start pod mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3, will reschedule [2021-02-24 16:33:05,406] {kubernetes_executor.py:354} INFO - Attempting to finish pod; pod_id: mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3; state: up_for_reschedule; annotations: {'dag_id': 'mock_dag_limr', 'task_id': 'SetupMockSparkDwhJob', 'execution_date': '2019-09-10T00:00:00+00:00', 'try_number': '9'} [2021-02-24 16:33:05,419] {kubernetes_executor.py:528} INFO - Changing state of (TaskInstanceKey(dag_id='mock_dag_limr', task_id='SetupMockSparkDwhJob', execution_date=datetime.datetime(2019, 9, 10, 0, 0, 
tzinfo=tzlocal()), try_number=9), 'up_for_reschedule', 'mockdaglimrsetupmocksparkdwhjob.791032759a764d8bae66fc7bd7ab2db3', 'airflow', '173647183') to up_for_reschedule [2021-02-24 16:33:05,422] {scheduler_job.py:1206} INFO - Executor reports execution of mock_dag_limr.SetupMockSparkDwhJob execution_date=2019-09-10 00:00:00+00:00 exited with status up_for_reschedule for try_number 9 ``` However task stays in failed state (not what scheduler says) When pod completes on its own (fails, exits with 0), callbacks are triggered correctly **What you expected to happen**: `on_failure_callback` is called regardless of how pod exists, including SIGTERM-based interruptions: pod eviction, pod deletion <!-- What do you think went wrong? --> Not sure really. We believe this code is executed since we get full stack trace https://github.com/apache/airflow/blob/2.0.1/airflow/models/taskinstance.py#L1149 But then it is unclear why `finally` clause here does not run: https://github.com/apache/airflow/blob/master/airflow/models/taskinstance.py#L1422 **How to reproduce it**: With Airflow 2.0.1 running KubernetesExecutor, execute `kubectl delete ...` on any running task pod. Task operator should define `on_failure_callback`. In order to check that it is/not called, send data from it to any external logging system **Anything else we need to know**: Problem is persistent and only exists in 2.0.1 version
https://github.com/apache/airflow/issues/14422
https://github.com/apache/airflow/pull/15172
e5d69ad6f2d25e652bb34b6bcf2ce738944de407
def1e7c5841d89a60f8972a84b83fe362a6a878d
"2021-02-24T16:55:21Z"
python
"2021-04-23T22:47:20Z"
closed
apache/airflow
https://github.com/apache/airflow
14,421
["airflow/api_connexion/openapi/v1.yaml", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"]
NULL values in the operator column of task_instance table cause API validation failures
**Apache Airflow version**: 2.0.1 **Environment**: Docker on Linux Mint 20.1, image based on apache/airflow:2.0.1-python3.8 **What happened**: I'm using the airflow API and the following exception occurred: ```python >>> import json >>> import requests >>> from requests.auth import HTTPBasicAuth >>> payload = {"dag_ids": ["{my_dag_id}"]} >>> r = requests.post("https://localhost:8080/api/v1/dags/~/dagRuns/~/taskInstances/list", auth=HTTPBasicAuth('username', 'password'), data=json.dumps(payload), headers={'Content-Type': 'application/json'}) >>> r.status_code 500 >>> print(r.text) { "detail": "None is not of type 'string'\n\nFailed validating 'type' in schema['allOf'][0]['properties'][ 'task_instances']['items']['properties']['operator']:\n {'type': 'string'}\n\nOn instance['task_instanc es'][5]['operator']:\n None", "status": 500, "title": "Response body does not conform to specification", "type": "https://airflow.apache.org/docs/2.0.1/stable-rest-api-ref.html#section/Errors/Unknown" } None is not of type 'string' Failed validating 'type' in schema['allOf'][0]['properties']['task_instances']['items']['properties']['ope rator']: {'type': 'string'} On instance['task_instances'][5]['operator']: None ``` This happens on all the "old" task instances before upgrading to 2.0.0 There is no issue with new task instances created after the upgrade. <!-- (please include exact error messages if you can) --> **What do you think went wrong?**: The `operator` column was introduced in 2.0.0. But during migration, all the existing database entries are filled with `NULL` values. So I had to execute this manually in my database ```sql UPDATE task_instance SET operator = 'NoOperator' WHERE operator IS NULL; ``` **How to reproduce it**: * Run airflow 1.10.14 * Create a DAG with multiple tasks and run them * Upgrade airflow to 2.0.0 or 2.0.1 * Make the API call as above **Anything else we need to know**: Similar to https://github.com/apache/airflow/issues/13799 but not exactly the same
https://github.com/apache/airflow/issues/14421
https://github.com/apache/airflow/pull/16516
60925453b1da9fe54ca82ed59889cd65a0171516
087556f0c210e345ac1749933ff4de38e40478f6
"2021-02-24T15:24:05Z"
python
"2021-06-18T07:56:05Z"
closed
apache/airflow
https://github.com/apache/airflow
14,364
["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"]
Missing schedule_delay metric
**Apache Airflow version**: 2.0.0, but applicable to master **Environment**: Running on ECS, but not relevant to the question **What happened**: I am not seeing the metric dagrun.schedule_delay.<dag_id> being reported. A search in the codebase seems to reveal that it no longer exists. It was originally added in https://github.com/apache/airflow/pull/5050. I suspect either: 1. This metric was intentionally removed, in which case the docs should be updated to remove it. 2. It was unintentionally removed during a refactor, in which case we should add it back. 3. I am bad at searching through code, and someone could hopefully point me to where it is reported from now. **How to reproduce it**: https://github.com/apache/airflow/search?q=schedule_delay
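For context, a hedged sketch of how such a metric could be emitted with Airflow's Stats API; the placement (a helper called when a DagRun starts) and the use of `following_schedule()` as the expected start time are assumptions, not the original implementation.

```python
from airflow.stats import Stats


def emit_schedule_delay(dag, dag_run):
    """Emit dagrun.schedule_delay.<dag_id> as start_date minus expected start."""
    expected_start = dag.following_schedule(dag_run.execution_date)
    if expected_start is None or dag_run.start_date is None:
        return
    Stats.timing(f"dagrun.schedule_delay.{dag.dag_id}", dag_run.start_date - expected_start)
```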
https://github.com/apache/airflow/issues/14364
https://github.com/apache/airflow/pull/15105
441b4ef19f07d8c72cd38a8565804e56e63b543c
ca4c4f3d343dea0a034546a896072b9c87244e71
"2021-02-22T18:18:44Z"
python
"2021-03-31T12:38:14Z"
closed
apache/airflow
https://github.com/apache/airflow
14,331
["airflow/api_connexion/openapi/v1.yaml", "airflow/utils/state.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"]
Airflow stable API taskInstance call fails if a task is removed from running DAG
**Apache Airflow version**: 2.0.1 **Environment**: Docker on Win 10 with WSL, image based on `apache/airflow:2.0.1-python3.8` **What happened**: I'm using the airflow API and the following (what I believe to be a) bug popped up: ```Python >>> import requests >>> r = requests.get("http://localhost:8084/api/v1/dags/~/dagRuns/~/taskInstances", auth=HTTPBasicAuth('username', 'password')) >>> r.status_code 500 >>> print(r.text) { "detail": "'removed' is not one of ['success', 'running', 'failed', 'upstream_failed', 'skipped', 'up_for_retry', 'up_for_reschedule', 'queued', 'none', 'scheduled']\n\nFailed validating 'enum' in schema['allOf'][0]['properties']['task_instances']['items']['properties']['state']:\n {'description': 'Task state.',\n 'enum': ['success',\n 'running',\n 'failed',\n 'upstream_failed',\n 'skipped',\n 'up_for_retry',\n 'up_for_reschedule',\n 'queued',\n 'none',\n 'scheduled'],\n 'nullable': True,\n 'type': 'string',\n 'x-scope': ['',\n '#/components/schemas/TaskInstanceCollection',\n '#/components/schemas/TaskInstance']}\n\nOn instance['task_instances'][16]['state']:\n 'removed'", "status": 500, "title": "Response body does not conform to specification", "type": "https://airflow.apache.org/docs/2.0.1rc2/stable-rest-api-ref.html#section/Errors/Unknown" } >>> print(r.json()["detail"]) 'removed' is not one of ['success', 'running', 'failed', 'upstream_failed', 'skipped', 'up_for_retry', 'up_for_reschedule', 'queued', 'none', 'scheduled'] Failed validating 'enum' in schema['allOf'][0]['properties']['task_instances']['items']['properties']['state']: {'description': 'Task state.', 'enum': ['success', 'running', 'failed', 'upstream_failed', 'skipped', 'up_for_retry', 'up_for_reschedule', 'queued', 'none', 'scheduled'], 'nullable': True, 'type': 'string', 'x-scope': ['', '#/components/schemas/TaskInstanceCollection', '#/components/schemas/TaskInstance']} On instance['task_instances'][16]['state']: 'removed' ``` This happened after I changed a DAG in the corresponding instance, thus a task was removed from a DAG while the DAG was running. **What you expected to happen**: Give me all task instances, whether including the removed ones or not is up to the airflow team to decide (no preferences from my side, though I'd guess it makes more sense to supply all data as it is available). **How to reproduce it**: - Run airflow - Create a DAG with multiple tasks - While the DAG is running, remove one of the tasks (ideally one that did not yet run) - Make the API call as above
https://github.com/apache/airflow/issues/14331
https://github.com/apache/airflow/pull/14381
ea7118316660df43dd0ac0a5e72283fbdf5f2396
7418679591e5df4ceaab6c471bc6d4a975201871
"2021-02-20T13:15:11Z"
python
"2021-03-08T21:24:59Z"
closed
apache/airflow
https://github.com/apache/airflow
14,326
["airflow/kubernetes/pod_generator.py", "tests/kubernetes/test_pod_generator.py"]
Task Instances stuck in "scheduled" state
**Apache Airflow version**: 2.0.1 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): v1.17.12 **Environment**: - **Cloud provider or hardware configuration**: AWS EKS **What happened**: Several Task Instances get `scheduled` but never move to `queued` or `running`. They then become orphan tasks. **What you expected to happen**: Tasks to get scheduled and run :) **Anything else we need to know**: I believe the issue is caused by this [limit](https://github.com/apache/airflow/blob/2.0.1/airflow/jobs/scheduler_job.py#L923). If we have more Task Instances than Pool Slots Free then some Task Instances may never show up in this query.
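An ad-hoc check that can help confirm the symptom (run from a Python shell on the scheduler host); this uses only standard ORM calls and is not tied to the suspected limit.

```python
from airflow.models import TaskInstance
from airflow.utils.session import create_session
from airflow.utils.state import State

with create_session() as session:
    stuck = session.query(TaskInstance).filter(TaskInstance.state == State.SCHEDULED).all()
    for ti in stuck:
        print(ti.dag_id, ti.task_id, ti.execution_date)
```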
https://github.com/apache/airflow/issues/14326
https://github.com/apache/airflow/pull/14703
b1ce429fee450aef69a813774bf5d3404d50f4a5
b5e7ada34536259e21fca5032ef67b5e33722c05
"2021-02-20T00:11:56Z"
python
"2021-03-26T14:41:18Z"
closed
apache/airflow
https://github.com/apache/airflow
14,299
["airflow/www/templates/airflow/dag_details.html"]
UI: Start Date is incorrect in "DAG Details" view
**Apache Airflow version**: 2.0.0 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): **Environment**: - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): Ubuntu - **Kernel** (e.g. `uname -a`): - **Install tools**: - **Others**: **What happened**: The start date in the "DAG Details" view `{AIRFLOW_URL}/dag_details?dag_id={dag_id}` is incorrect if there's a schedule for the DAG. **What you expected to happen**: Start date should be the same as the specified date in the DAG. **How to reproduce it**: For example, I created a DAG with a start date of `2019-07-09` but the DAG details view shows: ![image](https://user-images.githubusercontent.com/12819087/108415107-ee07c900-71e1-11eb-8661-79bd0288291c.png) Minimal code block to reproduce: ``` from datetime import datetime, timedelta START_DATE = datetime(2019, 7, 9) DAG_ID = '*redacted*' dag = DAG( dag_id=DAG_ID, description='*redacted*', catchup=False, start_date=START_DATE, schedule_interval=timedelta(weeks=1), ) start = DummyOperator(task_id='start', dag=dag) ```
https://github.com/apache/airflow/issues/14299
https://github.com/apache/airflow/pull/16206
78c4f1a46ce74f13a99447207f8cdf0fcfc7df95
ebc03c63af7282c9d826054b17fe7ed50e09fe4e
"2021-02-18T20:14:50Z"
python
"2021-06-08T14:13:18Z"
closed
apache/airflow
https://github.com/apache/airflow
14,279
["airflow/providers/amazon/aws/example_dags/example_s3_bucket.py", "airflow/providers/amazon/aws/example_dags/example_s3_bucket_tagging.py", "airflow/providers/amazon/aws/hooks/s3.py", "airflow/providers/amazon/aws/operators/s3_bucket.py", "airflow/providers/amazon/aws/operators/s3_bucket_tagging.py", "airflow/providers/amazon/provider.yaml", "docs/apache-airflow-providers-amazon/operators/s3.rst", "tests/providers/amazon/aws/hooks/test_s3.py", "tests/providers/amazon/aws/operators/test_s3_bucket_tagging.py", "tests/providers/amazon/aws/operators/test_s3_bucket_tagging_system.py"]
Add AWS S3 Bucket Tagging Operator
**Description** Add the missing AWS Operators for the three (get/put/delete) AWS S3 bucket tagging APIs, including testing. **Use case / motivation** I am looking to add an Operator that will implement the existing API functionality to manage the tags on an AWS S3 bucket. **Are you willing to submit a PR?** Yes **Related Issues** None that I saw
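A minimal sketch of the underlying calls such operators would wrap, using the existing S3Hook and the boto3 bucket-tagging client methods; the function names, bucket names and tag shape are illustrative.

```python
from airflow.providers.amazon.aws.hooks.s3 import S3Hook


def get_bucket_tags(bucket_name, aws_conn_id="aws_default"):
    client = S3Hook(aws_conn_id=aws_conn_id).get_conn()
    return client.get_bucket_tagging(Bucket=bucket_name).get("TagSet", [])


def put_bucket_tags(bucket_name, tags, aws_conn_id="aws_default"):
    # tags: e.g. [{"Key": "team", "Value": "data-eng"}]
    client = S3Hook(aws_conn_id=aws_conn_id).get_conn()
    client.put_bucket_tagging(Bucket=bucket_name, Tagging={"TagSet": tags})


def delete_bucket_tags(bucket_name, aws_conn_id="aws_default"):
    client = S3Hook(aws_conn_id=aws_conn_id).get_conn()
    client.delete_bucket_tagging(Bucket=bucket_name)
```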
https://github.com/apache/airflow/issues/14279
https://github.com/apache/airflow/pull/14402
f25ec3368348be479dde097efdd9c49ce56922b3
8ced652ecf847ed668e5eed27e3e47a51a27b1c8
"2021-02-17T17:07:01Z"
python
"2021-02-28T20:50:11Z"
closed
apache/airflow
https://github.com/apache/airflow
14,270
["airflow/task/task_runner/standard_task_runner.py", "tests/task/task_runner/test_standard_task_runner.py"]
Specify that exit code -9 is due to RAM
Related to https://github.com/apache/airflow/issues/9655. It would be nice to add a message when you get this error, with some extra information like 'This is probably caused by a lack of RAM' or something similar. I have found the code where the -9 is assigned but have no idea how to add a logging message: self.process = None if self._rc is None: # Something else reaped it before we had a chance, so let's just "guess" at an error code. self._rc = -9
https://github.com/apache/airflow/issues/14270
https://github.com/apache/airflow/pull/15207
eae22cec9c87e8dad4d6e8599e45af1bdd452062
18e2c1de776c8c3bc42c984ea0d31515788b6572
"2021-02-17T09:01:05Z"
python
"2021-04-06T19:02:11Z"
closed
apache/airflow
https://github.com/apache/airflow
14,252
["airflow/models/baseoperator.py", "tests/core/test_core.py"]
Unable to clear Failed task with retries
**Apache Airflow version**: 2.0.1 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): NA **Environment**: Windows WSL2 (Ubuntu) Local - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): Ubuntu 18.04 - **Kernel** (e.g. `uname -a`): Linux d255bce4dcd5 5.4.72-microsoft-standard-WSL2 - **Install tools**: Docker -compose - **Others**: **What happened**: I have a dag with tasks: Task1 - Get Date Task 2 - Get data from Api call (Have set retires to 3) Task 3 - Load Data Task 2 had failed after three attempts. I am unable to clear the task Instance and get the below error in UI. [Dag Code](https://github.com/anilkulkarni87/airflow-docker/blob/master/dags/covidNyDaily.py) ``` Python version: 3.8.7 Airflow version: 2.0.1rc2 Node: d255bce4dcd5 ------------------------------------------------------------------------------- Traceback (most recent call last): File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "/home/airflow/.local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise raise value File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "/home/airflow/.local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/auth.py", line 34, in decorated return func(*args, **kwargs) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/decorators.py", line 60, in wrapper return f(*args, **kwargs) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/views.py", line 1547, in clear return self._clear_dag_tis( File "/home/airflow/.local/lib/python3.8/site-packages/airflow/www/views.py", line 1475, in _clear_dag_tis count = dag.clear( File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 65, in wrapper return func(*args, session=session, **kwargs) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/dag.py", line 1324, in clear clear_task_instances( File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 160, in clear_task_instances ti.max_tries = ti.try_number + task_retries - 1 TypeError: unsupported operand type(s) for +: 'int' and 'str' ``` **What you expected to happen**: I expected to clear the Task Instance so that the task could be scheduled again. **How to reproduce it**: 1) Clone the repo link shared above 2) Follow instructions to setup cluster. 3) Change code to enforce error in Task 2 4) Execute and try to clear task instance after three attempts. ![Error pops up when clicked on Clear](https://user-images.githubusercontent.com/10644132/107998258-8e1ee180-6f99-11eb-8442-0c0be5b23478.png)
https://github.com/apache/airflow/issues/14252
https://github.com/apache/airflow/pull/16415
643f3c35a6ba3def40de7db8e974c72e98cfad44
15ff2388e8a52348afcc923653f85ce15a3c5f71
"2021-02-15T22:27:00Z"
python
"2021-06-13T00:29:14Z"
closed
apache/airflow
https://github.com/apache/airflow
14,202
["chart/templates/scheduler/scheduler-deployment.yaml"]
Scheduler in helm chart cannot access DAG with git sync
**Apache Airflow version**: 2.0.1 **What happened**: When dags `git-sync` is `true` and `persistent` is `false`, `airflow dags list` returns nothing and the `DAGS Folder` is empty. **What you expected to happen**: The scheduler container should still have a volumeMount to read the `dags` volume populated by the `git-sync` container. **How to reproduce it**: ``` --set dags.persistence.enabled=false \ --set dags.gitSync.enabled=true \ ``` The scheduler cannot access the git-synced DAGs because the scheduler's configured `DAGS Folder` path isn't mounted on the `dags` volume.
https://github.com/apache/airflow/issues/14202
https://github.com/apache/airflow/pull/14203
8f21fb1bf77fc67e37dc13613778ff1e6fa87cea
e164080479775aca53146331abf6f615d1f03ff0
"2021-02-12T06:56:10Z"
python
"2021-02-19T01:03:39Z"
closed
apache/airflow
https://github.com/apache/airflow
14,200
["docs/apache-airflow/best-practices.rst", "docs/apache-airflow/security/index.rst", "docs/apache-airflow/security/secrets/secrets-backend/index.rst"]
Update Best practises doc
Update https://airflow.apache.org/docs/apache-airflow/stable/best-practices.html#variables to use a Secrets Backend (especially Environment Variables), as it asks users not to use `Variable` in top-level code.
https://github.com/apache/airflow/issues/14200
https://github.com/apache/airflow/pull/17319
bcf719bfb49ca20eea66a2527307968ff290c929
2c1880a90712aa79dd7c16c78a93b343cd312268
"2021-02-11T19:31:08Z"
python
"2021-08-02T20:43:12Z"
closed
apache/airflow
https://github.com/apache/airflow
14,106
["airflow/lineage/__init__.py", "airflow/lineage/backend.py", "docs/apache-airflow/lineage.rst", "tests/lineage/test_lineage.py"]
Lineage Backend removed for no reason
**Description** The possibility to add a lineage backend was removed in https://github.com/apache/airflow/pull/6564 but was never reintroduced. Now that this code is in 2.0, the lineage information is only in the xcoms and the only way to get it is through an experimental API that isn't very practical either. **Use case / motivation** A custom callback at the time lineage gets pushed is enough to send the lineage information to whatever lineage backend the user has. **Are you willing to submit a PR?** I'd be willing to make a PR recovering the LineageBackend and add changes if needed, unless there is a different plan for lineage from the maintainers.
https://github.com/apache/airflow/issues/14106
https://github.com/apache/airflow/pull/14146
9ac1d0a3963b0e152cb2ba4a58b14cf6b61a73a0
af2d11e36ed43b0103a54780640493b8ae46d70e
"2021-02-05T16:47:46Z"
python
"2021-04-03T08:26:59Z"
closed
apache/airflow
https://github.com/apache/airflow
14,104
["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg"]
BACKEND: Unbound Variable issue in docker entrypoint
This is NOT a bug in Airflow; I'm writing this issue for documentation purposes, in case someone comes across this same issue and needs to identify how to solve it. Please tag as appropriate. **Apache Airflow version**: Docker 2.0.1rc2 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A **Environment**: Dockered - **Cloud provider or hardware configuration**: VMWare VM - **OS** (e.g. from /etc/os-release): Ubuntu 18.04.5 LTS - **Kernel** (e.g. `uname -a`): 4.15.0-128-generic - **Install tools**: Just docker/docker-compose **What happened**: Worker, webserver and scheduler docker containers do not start, with errors: <details><summary>/entrypoint: line 71: BACKEND: unbound variable</summary> af_worker | /entrypoint: line 71: BACKEND: unbound variable af_worker | /entrypoint: line 71: BACKEND: unbound variable af_worker exited with code 1 af_webserver | /entrypoint: line 71: BACKEND: unbound variable af_webserver | /entrypoint: line 71: BACKEND: unbound variable af_webserver | /entrypoint: line 71: BACKEND: unbound variable af_webserver | /entrypoint: line 71: BACKEND: unbound variable </details> **What you expected to happen**: Docker containers to start. **How to reproduce it**: Whatever docker-compose file I was copying has a MySQL connection string that is not compatible with: https://github.com/apache/airflow/blob/bc026cf6961626dd01edfaf064562bfb1f2baf42/scripts/in_container/prod/entrypoint_prod.sh#L58 -- Specifically, the connection string in the docker-compose file did not have a password, and no `:` separator for a blank password. Original connection string: `mysql://root@mysql/airflow?charset=utf8mb4` **Anything else we need to know**: The solution is to use a password, or at the very least add the `:` to the user:password section. Fixed connection string: `mysql://root:@mysql/airflow?charset=utf8mb4`
https://github.com/apache/airflow/issues/14104
https://github.com/apache/airflow/pull/14124
d77f79d134e0d14443f75325b24dffed4b779920
b151b5eea5057f167bf3d2f13a84ab4eb8e42734
"2021-02-05T15:31:07Z"
python
"2021-03-22T15:42:37Z"
closed
apache/airflow
https://github.com/apache/airflow
14,097
["UPDATING.md", "airflow/contrib/sensors/gcs_sensor.py", "airflow/providers/google/BACKPORT_PROVIDER_README.md", "airflow/providers/google/cloud/sensors/gcs.py", "tests/always/test_project_structure.py", "tests/deprecated_classes.py", "tests/providers/google/cloud/sensors/test_gcs.py"]
Typo in Sensor: GCSObjectsWtihPrefixExistenceSensor (should be GCSObjectsWithPrefixExistenceSensor)
Typo in Google Cloud Storage sensor: airflow/providers/google/cloud/sensors/gcs/GCSObjectsWithPrefixExistenceSensor The word _With_ is spelled incorrectly. It should be: GCSObjects**With**PrefixExistenceSensor **Apache Airflow version**: 2.0.0 **Environment**: - **Cloud provider or hardware configuration**: Google Cloud - **OS** (e.g. from /etc/os-release): Mac OS BigSur
https://github.com/apache/airflow/issues/14097
https://github.com/apache/airflow/pull/14179
6dc6339635f41a9fa50a987c4fdae5af0bae9fdc
e3bcaa3ba351234effe52ad380345c4e39003fcb
"2021-02-05T12:13:09Z"
python
"2021-02-12T20:14:00Z"
closed
apache/airflow
https://github.com/apache/airflow
14,077
["airflow/providers/google/marketing_platform/hooks/display_video.py", "airflow/providers/google/marketing_platform/operators/display_video.py", "tests/providers/google/marketing_platform/hooks/test_display_video.py", "tests/providers/google/marketing_platform/operators/test_display_video.py"]
GoogleDisplayVideo360Hook.download_media does not pass the resourceName correctly
**Apache Airflow version**: 1.10.12 **Environment**: Google Cloud Composer 1.13.3 - **Cloud provider or hardware configuration**: - Google Cloud Composer **What happened**: The GoogleDisplayVideo360Hook.download_media hook tries to download media using the "resource_name" argument. However, [per the API spec](https://developers.google.com/display-video/api/reference/rest/v1/media/download) it should pass "resourceName" Thus, it breaks every time and can never download media. Error: `ERROR - Got an unexpected keyword argument "resource_name"` **What you expected to happen**: The hook should pass in the correct resourceName and then download the media file. **How to reproduce it**: Run any workflow that tries to download any DV360 media. **Anything else we need to know**: I have written a patch that fixes the issue and will submit it shortly.
https://github.com/apache/airflow/issues/14077
https://github.com/apache/airflow/pull/20528
af4a2b0240fbf79a0a6774a9662243050e8fea9c
a6e60ce25d9f3d621a7b4089834ca5e50cd123db
"2021-02-04T16:35:25Z"
python
"2021-12-30T12:48:55Z"
closed
apache/airflow
https://github.com/apache/airflow
14,071
["airflow/providers/jenkins/operators/jenkins_job_trigger.py", "tests/providers/jenkins/operators/test_jenkins_job_trigger.py"]
Add support for UNSTABLE Jenkins status
**Description** Don't mark the DAG as `failed` when an `UNSTABLE` status is received from Jenkins. This could be done by adding an `allow_unstable: bool` or `success_status_values: list` parameter to `JenkinsJobTriggerOperator.__init__`. For now the `SUCCESS` status is hardcoded, and any other status leads to a failure. **Use case / motivation** I want to restart a job (`retries` parameter) only if I get a `FAILED` status. `UNSTABLE` is okay for me and there is no need to restart. **Are you willing to submit a PR?** Yes **Related Issues** No
https://github.com/apache/airflow/issues/14071
https://github.com/apache/airflow/pull/14131
f180fa13bf2a0ffa31b30bb21468510fe8a20131
78adaed5e62fa604d2ef2234ad560eb1c6530976
"2021-02-04T15:20:47Z"
python
"2021-02-08T21:43:39Z"
closed
apache/airflow
https://github.com/apache/airflow
14,051
["docs/build_docs.py", "docs/exts/docs_build/spelling_checks.py", "docs/spelling_wordlist.txt"]
Docs Builder creates SpellingError for Sphinx error unrelated to spelling issues
**Apache Airflow version**: 2.0.0 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): n/a **Environment**: n/a **What happened**: A Sphinx warning unrelated to spelling, raised while running `sphinx-build`, resulted in an instance of `SpellingError` and caused a docs build failure. ``` SpellingError( file_path=None, line_no=None, spelling=None, suggestion=None, context_line=None, message=( f"Sphinx spellcheck returned non-zero exit status: {completed_proc.returncode}." ) ) # sphinx.errors.SphinxWarning: /opt/airflow/docs/apache-airflow-providers-google/_api/drive/index.rst:document isn't included in any toctree ``` The actual issue was that I failed to include an `__init__.py` file in a directory that I created. **What you expected to happen**: An exception unrelated to spelling should be raised, preferably one indicating that a directory is missing an `__init__.py` file, but at least a generic error unrelated to spelling. **How to reproduce it**: Create a new plugin directory (e.g. `airflow/providers/google/suite/sensors`), don't include an `__init__.py` file, and run `./breeze build-docs -- --docs-only -v`. **Anything else we need to know**: I'm specifically referring to lines 139 to 150 in `docs/exts/docs_build/docs_builder.py`
https://github.com/apache/airflow/issues/14051
https://github.com/apache/airflow/pull/14196
e31b27d593f7379f38ced34b6e4ce8947b91fcb8
cb4a60e9d059eeeae02909bb56a348272a55c233
"2021-02-03T16:46:25Z"
python
"2021-02-12T23:46:23Z"
closed
apache/airflow
https://github.com/apache/airflow
14,050
["airflow/jobs/scheduler_job.py", "airflow/serialization/serialized_objects.py", "tests/jobs/test_scheduler_job.py", "tests/serialization/test_dag_serialization.py"]
SLA mechanism does not work
**Apache Airflow version**: 2.0.0 **What happened**: I have the following DAG: ```py from datetime import datetime, timedelta from airflow import DAG from airflow.operators.bash_operator import BashOperator with DAG( dag_id="sla_trigger", schedule_interval="* * * * *", start_date=datetime(2020, 2, 3), ) as dag: BashOperator( task_id="bash_task", bash_command="sleep 30", sla=timedelta(seconds=2), ) ``` In my understanding this DAG should result in an SLA miss every time it is triggered (every minute). However, after a few minutes of running I don't see any SLA misses... **What you expected to happen**: I expect to see an SLA miss if the task takes longer than expected. **How to reproduce it**: Use the DAG from above. **Anything else we need to know**: N/A
https://github.com/apache/airflow/issues/14050
https://github.com/apache/airflow/pull/14056
914e9ce042bf29dc50d410f271108b1e42da0add
604a37eee50715db345c5a7afed085c9afe8530d
"2021-02-03T14:58:32Z"
python
"2021-02-04T01:59:31Z"
closed
apache/airflow
https://github.com/apache/airflow
14,046
["airflow/www/templates/airflow/tree.html"]
Day change flag is in wrong place
**Apache Airflow version**: 2.0 **What happened**: In the tree view, the "day marker" is shifted and the last dagrun of the previous day is included in the new day. See: <img width="398" alt="Screenshot 2021-02-03 at 14 14 55" src="https://user-images.githubusercontent.com/9528307/106752180-7014c100-662a-11eb-9342-661a237ed66c.png"> The tooltip is on the 4th dagrun, but the day flag is in the same line as the 3rd one. **What you expected to happen**: I expect to see the day flag between two days, not earlier. **How to reproduce it**: Create a DAG with `schedule_interval="5 8-23/1 * * *"`
https://github.com/apache/airflow/issues/14046
https://github.com/apache/airflow/pull/14141
0f384f0644c8cfe55ca4c75d08b707be699b440f
6dc6339635f41a9fa50a987c4fdae5af0bae9fdc
"2021-02-03T13:19:58Z"
python
"2021-02-12T18:50:02Z"
closed
apache/airflow
https://github.com/apache/airflow
14,010
["airflow/www/templates/airflow/task.html"]
Order of items not preserved in Task instance view
**Apache Airflow version**: 2.0.0 **What happened**: The order of items is not preserved in the Task Instance information view: <img width="542" alt="Screenshot 2021-02-01 at 16 49 09" src="https://user-images.githubusercontent.com/9528307/106482104-6a45a100-64ad-11eb-8d2f-e478c267bce9.png"> <img width="542" alt="Screenshot 2021-02-01 at 16 49 43" src="https://user-images.githubusercontent.com/9528307/106482167-7df10780-64ad-11eb-9434-ba3e54d56dec.png"> **What you expected to happen**: I expect the order to always be the same; otherwise the UX is bad. **How to reproduce it**: It seems to happen randomly, but once seen, the order is then consistent for a given TI.
https://github.com/apache/airflow/issues/14010
https://github.com/apache/airflow/pull/14036
68758b826076e93fadecf599108a4d304dd87ac7
fc67521f31a0c9a74dadda8d5f0ac32c07be218d
"2021-02-01T15:51:38Z"
python
"2021-02-05T15:38:13Z"
closed
apache/airflow
https://github.com/apache/airflow
13,989
["airflow/providers/telegram/operators/telegram.py", "tests/providers/telegram/operators/test_telegram.py"]
AttributeError: 'TelegramOperator' object has no attribute 'text'
Hi there 👋 I was playing with the **TelegramOperator** and stumbled upon a bug with the `text` field. It is supposed to be a template field, but in reality the **TelegramOperator** instance does not have this attribute, so every time I try to execute the code I get the error: > AttributeError: 'TelegramOperator' object has no attribute 'text' ```python TelegramOperator( task_id='send_message_telegram', telegram_conn_id='telegram_conn_id', text='Hello from Airflow!' ) ```
https://github.com/apache/airflow/issues/13989
https://github.com/apache/airflow/pull/13990
9034f277ef935df98b63963c824ba71e0dcd92c7
106d2c85ec4a240605830bf41962c0197b003135
"2021-01-30T19:25:35Z"
python
"2021-02-10T12:06:04Z"
closed
apache/airflow
https://github.com/apache/airflow
13,985
["airflow/www/static/js/connection_form.js"]
Can't save any connection if provider-provided connection form widgets have fields marked as InputRequired
**Apache Airflow version**: 2.0.0 with the following patch: https://github.com/apache/airflow/pull/13640 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): N/A **Environment**: - **Cloud provider or hardware configuration**: AMD Ryzen 3900X (12C/24T), 64GB RAM - **OS** (e.g. from /etc/os-release): Ubuntu 20.04.1 LTS - **Kernel** (e.g. `uname -a`): 5.9.8-050908-generic - **Install tools**: N/A - **Others**: N/A **What happened**: If there are custom hooks that implement the `get_connection_form_widgets` method that return fields using the `InputRequired` validator, saving breaks for all types of connections on the "Edit Connections" page. In Chrome, the following message is logged to the browser console: ``` An invalid form control with name='extra__hook_name__field_name' is not focusable. ``` This happens because the field is marked as `<input required>` but is hidden using CSS when the connection type exposed by the custom hook is not selected. **What you expected to happen**: Should be able to save other types of connections. In particular, either one of the following should happen: 1. The fields not belonging to the currently selected connection type should not just be hidden using CSS, but should be removed from the DOM entirely. 2. Remove the `required` attribute if the form field is hidden. **How to reproduce it**: Create a provider, and add a hook with something like: ```python @staticmethod def get_connection_form_widgets() -> Dict[str, Any]: """Returns connection widgets to add to connection form.""" return { 'extra__my_hook__client_id': StringField( lazy_gettext('OAuth2 Client ID'), widget=BS3TextFieldWidget(), validators=[wtforms.validators.InputRequired()], ), } ``` Go to the Airflow Web UI, click the "Add" button in the connection list page, then choose a connection type that's not the type exposed by the custom hook. Fill in details and click "Save". **Anything else we need to know**: N/A
https://github.com/apache/airflow/issues/13985
https://github.com/apache/airflow/pull/14052
f9c9e9c38f444a39987478f3d1a262db909de8c4
98bbe5aec578a012c1544667bf727688da1dadd4
"2021-01-30T16:21:53Z"
python
"2021-02-11T13:59:21Z"
closed
apache/airflow
https://github.com/apache/airflow
13,924
["scripts/in_container/_in_container_utils.sh"]
Improve error messages and propagation in CI builds
Airflow version: dev The error information in `Backport packages: wheel` is not that easy to find. Here is the end of the step that failed and the end of its log: <img width="1151" alt="Screenshot 2021-01-27 at 12 02 01" src="https://user-images.githubusercontent.com/9528307/105982515-aa64e800-6097-11eb-91c8-9911448d1301.png"> but in fact the error happened some 500 lines earlier: <img width="1151" alt="Screenshot 2021-01-27 at 12 01 47" src="https://user-images.githubusercontent.com/9528307/105982504-a769f780-6097-11eb-8873-02c1d9b2d670.png"> **What you expected to happen?** I would expect the error to be at the end of the step. Otherwise the message `The previous step completed with error. Please take a look at output above ` is slightly misleading.
https://github.com/apache/airflow/issues/13924
https://github.com/apache/airflow/pull/15190
041a09f3ee6bc447c3457b108bd5431a2fd70ad9
7c17bf0d1e828b454a6b2c7245ded275b313c792
"2021-01-27T11:07:09Z"
python
"2021-04-04T20:20:11Z"
closed
apache/airflow
https://github.com/apache/airflow
13,918
["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "kubernetes_tests/test_kubernetes_pod_operator_backcompat.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"]
KubernetesPodOperator with pod_template_file = No Metadata & Wrong Pod Name
**Apache Airflow version**: 2.0.0 **Kubernetes version (if you are using kubernetes)** 1.15.15 **What happened**: If you use the **KubernetesPodOperator** with **LocalExecutor** and you use a **pod_template_file**, the pod created doesn't have metadata like : - dag_id - task_id - ... I want to have a ``privileged_escalation=True`` pod, launched by a KubernetesPodOperator but without the KubernetesExecutor. Is it possible ? **What you expected to happen**: Have the pod launched with privileged escalation & metadata & correct pod-name override. **How to reproduce it**: * have a pod template file : **privileged_runner.yaml** : ```yaml apiVersion: v1 kind: Pod metadata: name: privileged-pod spec: containers: - name: base securityContext: allowPrivilegeEscalation: true privileged: true ``` * have a DAG file with KubernetesOperator in it : **my-dag.py** : ```python ##=========================================================================================## ## CONFIGURATION from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator from airflow.operators.dummy_operator import DummyOperator from airflow.kubernetes.secret import Secret from kubernetes.client import models as k8s from airflow.models import Variable from datetime import datetime, timedelta from airflow import DAG env = Variable.get("process_env") namespace = Variable.get("namespace") default_args = { 'owner': 'airflow', 'depends_on_past': False, 'email_on_failure': False, 'email_on_retry': False, 'retries': 1, 'retry_delay': timedelta(minutes=5) } ##==============================## ## Définition du DAG dag = DAG( 'transfert-files-to-nexus', start_date=datetime.utcnow(), schedule_interval="0 2 * * *", default_args=default_args, max_active_runs=1 ) ##=========================================================================================## ## Définition des tâches start = DummyOperator(task_id='start', dag=dag) end = DummyOperator(task_id='end', dag=dag) transfertfile = KubernetesPodOperator(namespace=namespace, task_id="transfertfile", name="transfertfile", image="registrygitlab.fr/docker-images/python-runner:1.8.22", image_pull_secrets="registrygitlab-curie", pod_template_file="/opt/bitnami/airflow/dags/git-airflow-dags/privileged_runner.yaml", is_delete_operator_pod=False, get_logs=True, dag=dag) ## Enchainement des tâches start >> transfertfile >> end ``` **Anything else we need to know**: I know that we have to use the ``KubernetesExecutor`` in order to have the **metadata**, but even if you use the ``KubernetesExecutor``, the fact that you have to use the **pod_template_file** for the ``KubernetesPodOperator`` makes no change, because in either ``LocalExecutor`` / ``KubernetesExecutor``you will endup with no pod name override correct & metadata.
https://github.com/apache/airflow/issues/13918
https://github.com/apache/airflow/pull/15492
def1e7c5841d89a60f8972a84b83fe362a6a878d
be421a6b07c2ae9167150b77dc1185a94812b358
"2021-01-26T20:27:09Z"
python
"2021-04-23T22:54:43Z"
closed
apache/airflow
https://github.com/apache/airflow
13,905
["setup.py"]
DockerOperator fails to pull an image
**Apache Airflow version**: 2.0 **Environment**: - **OS** (from /etc/os-release): Debian GNU/Linux 10 (buster) - **Kernel** (`uname -a`): Linux 37365fa0b59b 5.4.0-47-generic #51-Ubuntu SMP Fri Sep 4 19:50:52 UTC 2020 x86_64 GNU/Linux - **Others**: running inside a docker container, forked puckel/docker-airflow **What happened**: `DockerOperator` does not attempt to pull an image unless force_pull is set to True, instead displaying a misleading 404 error. **What you expected to happen**: `DockerOperator` should attempt to pull an image when it is not present locally. **How to reproduce it**: Make sure you don't have an image tagged `debian:buster-slim` present locally. ``` DockerOperator( task_id=f'try_to_pull_debian', image='debian:buster-slim', command=f'''echo hello''', force_pull=False ) ``` prints: `{taskinstance.py:1396} ERROR - 404 Client Error: Not Found ("No such image: ubuntu:latest")` This, on the other hand: ``` DockerOperator( task_id=f'try_to_pull_debian', image='debian:buster-slim', command=f'''echo hello''', force_pull=True ) ``` pulls the image and prints `{docker.py:263} INFO - hello` **Anything else we need to know**: I overrode `DockerOperator` to track down what I was doing wrong and found the following: When trying to run an image that's not present locally, `self.cli.images(name=self.image)` in the line: https://github.com/apache/airflow/blob/8723b1feb82339d7a4ba5b40a6c4d4bbb995a4f9/airflow/providers/docker/operators/docker.py#L286 returns a non-empty array even when the image has been deleted from the local machine. In fact, `self.cli.images` appears to return non-empty arrays even when supplied with nonsense image names. <details><summary>force_pull_false.log</summary> [2021-01-27 06:15:28,987] {__init__.py:124} DEBUG - Preparing lineage inlets and outlets [2021-01-27 06:15:28,987] {__init__.py:168} DEBUG - inlets: [], outlets: [] [2021-01-27 06:15:28,987] {config.py:21} DEBUG - Trying paths: ['/usr/local/airflow/.docker/config.json', '/usr/local/airflow/.dockercfg'] [2021-01-27 06:15:28,987] {config.py:25} DEBUG - Found file at path: /usr/local/airflow/.docker/config.json [2021-01-27 06:15:28,987] {auth.py:182} DEBUG - Found 'auths' section [2021-01-27 06:15:28,988] {auth.py:142} DEBUG - Found entry (registry='https://index.docker.io/v1/', username='xxxxxxx') [2021-01-27 06:15:29,015] {connectionpool.py:433} DEBUG - http://localhost:None "GET /version HTTP/1.1" 200 851 [2021-01-27 06:15:29,060] {connectionpool.py:433} DEBUG - http://localhost:None "GET /v1.41/images/json?filter=debian%3Abuster-slim&only_ids=0&all=0 HTTP/1.1" 200 None [2021-01-27 06:15:29,060] {docker.py:224} INFO - Starting docker container from image debian:buster-slim [2021-01-27 06:15:29,063] {connectionpool.py:433} DEBUG - http://localhost:None "POST /v1.41/containers/create HTTP/1.1" 404 48 [2021-01-27 06:15:29,063] {taskinstance.py:1396} ERROR - 404 Client Error: Not Found ("No such image: debian:buster-slim") Traceback (most recent call last): File "/usr/local/lib/python3.8/site-packages/docker/api/client.py", line 261, in _raise_for_status response.raise_for_status() File "/usr/local/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.41/containers/create During handling of the above exception, another exception occurred: Traceback (most recent call last): File 
"/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1086, in _run_raw_task self._prepare_and_execute_task_with_callbacks(context, task) File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1260, in _prepare_and_execute_task_with_callbacks result = self._execute_task(context, task_copy) File "/usr/local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1300, in _execute_task result = task_copy.execute(context=context) File "/usr/local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 305, in execute return self._run_image() File "/usr/local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 231, in _run_image self.container = self.cli.create_container( File "/usr/local/lib/python3.8/site-packages/docker/api/container.py", line 427, in create_container return self.create_container_from_config(config, name) File "/usr/local/lib/python3.8/site-packages/docker/api/container.py", line 438, in create_container_from_config return self._result(res, True) File "/usr/local/lib/python3.8/site-packages/docker/api/client.py", line 267, in _result self._raise_for_status(response) File "/usr/local/lib/python3.8/site-packages/docker/api/client.py", line 263, in _raise_for_status raise create_api_error_from_http_exception(e) File "/usr/local/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception raise cls(e, response=response, explanation=explanation) docker.errors.ImageNotFound: 404 Client Error: Not Found ("No such image: debian:buster-slim") </details> <details><summary>force_pull_true.log</summary> [2021-01-27 06:17:01,811] {__init__.py:124} DEBUG - Preparing lineage inlets and outlets [2021-01-27 06:17:01,811] {__init__.py:168} DEBUG - inlets: [], outlets: [] [2021-01-27 06:17:01,811] {config.py:21} DEBUG - Trying paths: ['/usr/local/airflow/.docker/config.json', '/usr/local/airflow/.dockercfg'] [2021-01-27 06:17:01,811] {config.py:25} DEBUG - Found file at path: /usr/local/airflow/.docker/config.json [2021-01-27 06:17:01,811] {auth.py:182} DEBUG - Found 'auths' section [2021-01-27 06:17:01,812] {auth.py:142} DEBUG - Found entry (registry='https://index.docker.io/v1/', username='xxxxxxxxx') [2021-01-27 06:17:01,825] {connectionpool.py:433} DEBUG - http://localhost:None "GET /version HTTP/1.1" 200 851 [2021-01-27 06:17:01,826] {docker.py:287} INFO - Pulling docker image debian:buster-slim [2021-01-27 06:17:01,826] {auth.py:41} DEBUG - Looking for auth config [2021-01-27 06:17:01,826] {auth.py:242} DEBUG - Looking for auth entry for 'docker.io' [2021-01-27 06:17:01,826] {auth.py:250} DEBUG - Found 'https://index.docker.io/v1/' [2021-01-27 06:17:01,826] {auth.py:54} DEBUG - Found auth config [2021-01-27 06:17:04,399] {connectionpool.py:433} DEBUG - http://localhost:None "POST /v1.41/images/create?tag=buster-slim&fromImage=debian HTTP/1.1" 200 None [2021-01-27 06:17:04,400] {docker.py:301} INFO - buster-slim: Pulling from library/debian [2021-01-27 06:17:04,982] {docker.py:301} INFO - a076a628af6f: Pulling fs layer [2021-01-27 06:17:05,884] {docker.py:301} INFO - a076a628af6f: Downloading [2021-01-27 06:17:11,429] {docker.py:301} INFO - a076a628af6f: Verifying Checksum [2021-01-27 06:17:11,429] {docker.py:301} INFO - a076a628af6f: Download complete [2021-01-27 06:17:11,480] {docker.py:301} INFO - a076a628af6f: Extracting </details>
https://github.com/apache/airflow/issues/13905
https://github.com/apache/airflow/pull/15731
7933aaf07f5672503cfd83361b00fda9d4c281a3
41930fdebfaa7ed2c53e7861c77a83312ca9bdc4
"2021-01-26T05:49:03Z"
python
"2021-05-09T21:05:49Z"
closed
apache/airflow
https://github.com/apache/airflow
13,891
["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/migrations/versions/2c6edca13270_resource_based_permissions.py", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py", "docs/apache-airflow/security/access-control.rst", "tests/api_connexion/endpoints/test_dag_run_endpoint.py", "tests/www/test_views.py"]
RBAC Granular DAG Permissions don't work as intended
Previous versions (before 2.0) allowed for granular can_edit DAG permissions, so that different user groups could trigger different DAGs and access control was more specific. Since 2.0 it seems that this does not work as expected. How to reproduce: Create a copy of the VIEWER role and try granting it can_edit on a specific DAG. **Expected Result:** the user can trigger said DAG. **Actual Result:** user access is denied. It seems a new permission was added, **can create on DAG runs**, and without it the user cannot run DAGs; however, with it, the user can run all DAGs without limitation, which I believe is unintended.
https://github.com/apache/airflow/issues/13891
https://github.com/apache/airflow/pull/13922
568327f01a39d6f181dda62ef6a143f5096e6b97
629abfdbab23da24ca45996aaaa6e3aa094dd0de
"2021-01-25T13:55:12Z"
python
"2021-02-03T03:16:18Z"
closed
apache/airflow
https://github.com/apache/airflow
13,805
["airflow/cli/commands/task_command.py"]
Could not get scheduler_job_id
**Apache Airflow version:** 2.0.0 **Kubernetes version (if you are using kubernetes) (use kubectl version):** 1.18.3 **Environment:** Cloud provider or hardware configuration: AWS **What happened:** When trying to run a DAG, it gets scheduled, but task is never run. When attempting to run task manually, it shows an error: ``` Something bad has happened. Please consider letting us know by creating a bug report using GitHub. Python version: 3.8.7 Airflow version: 2.0.0 Node: airflow-web-ffdd89d6-h98vj ------------------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "/usr/local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise raise value File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/usr/local/lib/python3.8/site-packages/airflow/www/auth.py", line 34, in decorated return func(*args, **kwargs) File "/usr/local/lib/python3.8/site-packages/airflow/www/decorators.py", line 60, in wrapper return f(*args, **kwargs) File "/usr/local/lib/python3.8/site-packages/airflow/www/views.py", line 1366, in run executor.start() File "/usr/local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 493, in start raise AirflowException("Could not get scheduler_job_id") airflow.exceptions.AirflowException: Could not get scheduler_job_id ``` **What you expected to happen:** The task to be run successfully without **How to reproduce it:** Haven't pinpointed what causes the issue, besides an attempted upgrade from Airflow 1.10.14 to Airflow 2.0.0 **Anything else we need to know:** This error is encountered in an upgrade of Airflow from 1.10.14 to Airflow 2.0.0 EDIT: Formatted to fit the issue template
https://github.com/apache/airflow/issues/13805
https://github.com/apache/airflow/pull/16108
436e0d096700c344e7099693d9bf58e12658f9ed
cdc9f1a33854254607fa81265a323cf1eed6d6bb
"2021-01-21T10:09:05Z"
python
"2021-05-27T12:50:03Z"
closed
apache/airflow
https://github.com/apache/airflow
13,799
["airflow/migrations/versions/8646922c8a04_change_default_pool_slots_to_1.py", "airflow/models/taskinstance.py"]
Scheduler crashes when unpausing some dags with: TypeError: '>' not supported between instances of 'NoneType' and 'int'
**Apache Airflow version**: 2.0.0 **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): 1.15 **Environment**: - **Cloud provider or hardware configuration**: GKE - **OS** (e.g. from /etc/os-release): Ubuntu 18.04 **What happened**: I just migrated from 1.10.14 to 2.0.0. When I turn on some random dags, the scheduler crashes with the following error: ```python Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1275, in _execute self._run_scheduler_loop() File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1377, in _run_scheduler_loop num_queued_tis = self._do_scheduling(session) File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1533, in _do_scheduling num_queued_tis = self._critical_section_execute_task_instances(session=session) File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1132, in _critical_section_execute_task_instances queued_tis = self._executable_task_instances_to_queued(max_tis, session=session) File "/usr/local/lib/python3.6/dist-packages/airflow/utils/session.py", line 62, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/airflow/jobs/scheduler_job.py", line 1034, in _executable_task_instances_to_queued if task_instance.pool_slots > open_slots: TypeError: '>' not supported between instances of 'NoneType' and 'int' ``` **What you expected to happen**: I expected those dags would have their tasks scheduled without problems. **How to reproduce it**: Can't reproduce it yet. Still trying to figure out if this happens only with specific dags or not. **Anything else we need to know**: I couldn't find in which context `task_instance.pool_slots` could be None
https://github.com/apache/airflow/issues/13799
https://github.com/apache/airflow/pull/14406
c069e64920da780237a1e1bdd155319b007a2587
f763b7c3aa9cdac82b5d77e21e1840fbe931257a
"2021-01-20T22:08:00Z"
python
"2021-02-25T02:56:40Z"
closed
apache/airflow
https://github.com/apache/airflow
13,774
["airflow/providers/amazon/aws/operators/s3_copy_object.py"]
add acl_policy to S3CopyObjectOperator
<!-- Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions. Don't worry if they're not all applicable; just try to include what you can :-) If you need to include code snippets or logs, please put them in fenced code blocks. If they're super-long, please use the details tag like <details><summary>super-long log</summary> lots of stuff </details> Please delete these comment blocks before submitting the issue. --> **Description** <!-- A short description of your feature --> **Use case / motivation** <!-- What do you want to happen? Rather than telling us how you might implement this solution, try to take a step back and describe what you are trying to achieve. --> **Are you willing to submit a PR?** <!--- We accept contributions! --> **Related Issues** <!-- Is there currently another issue associated with this? -->
https://github.com/apache/airflow/issues/13774
https://github.com/apache/airflow/pull/13773
9923d606d2887c52390a30639fc1ee0d4000149c
29730d720066a4c16d524e905de8cdf07e8cd129
"2021-01-19T21:53:18Z"
python
"2021-01-20T15:16:25Z"
closed
apache/airflow
https://github.com/apache/airflow
13,761
["airflow/example_dags/tutorial.py", "airflow/models/baseoperator.py", "airflow/serialization/schema.json", "airflow/www/utils.py", "airflow/www/views.py", "docs/apache-airflow/concepts.rst", "tests/serialization/test_dag_serialization.py", "tests/www/test_utils.py"]
Markdown from doc_md is not being rendered in ui
**Apache Airflow version**: 1.10.14 **Environment**: - **Cloud provider or hardware configuration**: docker - **OS** (e.g. from /etc/os-release): apache/airflow:1.10.14-python3.8 - **Kernel** (e.g. `uname -a`): Linux host 5.4.0-62-generic #70-Ubuntu SMP Tue Jan 12 12:45:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux - **Install tools**: Docker version 19.03.8, build afacb8b7f0 - **Others**: **What happened**: I created a DAG and set the doc_md property on the object but it isn't being rendered in the UI. **What you expected to happen**: I expected the markdown to be rendered in the UI **How to reproduce it**: Created a new container using the `airflow:1.10.14`, I have tried the following images with the same results. - airflow:1.10.14:image-python3.8 - airflow:1.10.14:image-python3.7 - airflow:1.10.12:image-python3.7 - airflow:1.10.12:image-python3.7 ``` dag_docs = """ ## Pipeline #### Purpose This is a pipeline """ dag = DAG( 'etl-get_from_api', default_args=default_args, description='A simple dag', schedule_interval=timedelta(days=1), ) dag.doc_md = dag_docs ``` ![image](https://user-images.githubusercontent.com/29732449/105004686-6b77d680-5a88-11eb-9e34-c8dd38b3fd10.png) ![image](https://user-images.githubusercontent.com/29732449/105004748-7af71f80-5a88-11eb-811c-11bc6a351c71.png) I have also tried with using a doc-string to populate the doc_md as well as adding some text within the constructor. ``` dag = DAG( 'etl-get_from_api', default_args=default_args, description='A simple dag', schedule_interval=timedelta(days=1), doc_md = "some text" ) ``` All of the different permutations I've tried seem to have the same result. The only thing I can change is the description, that appears to show up correctly. **Anything else we need to know**: I have tried multiple browsers (Firefox and Chrome) and I have also done an inspect on from both the graph view and the tree view from within the dag but I can't find any of the text within the page at all.
https://github.com/apache/airflow/issues/13761
https://github.com/apache/airflow/pull/15191
7c17bf0d1e828b454a6b2c7245ded275b313c792
e86f5ca8fa5ff22c1e1f48addc012919034c672f
"2021-01-19T08:10:12Z"
python
"2021-04-05T02:46:41Z"